Robert Grupe's AppSecNewsBits 2024-09-08
This week’s news roundup newsletter. Epic Fails: Disney, US Navy, NH elections offshoring, security biz Verkada CCTVs, MS Copilot, own-goal check fraud
EPIC FAILS in Application Development Security practice processes, training, implementation, and incident response
After seeing Wi-Fi network named “STINKY,” Navy found hidden Starlink dish on US warship
In early 2023, while in the US preparing for a deployment, Command Senior Chief Grisel Marrero—the enlisted shipboard leader—led a scheme to buy a Starlink for $2,800 and to install it inconspicuously on the ship's deck. The system was only for use by chiefs—not by officers or by most enlisted personnel—and a Navy investigation later revealed that at least 15 chiefs were in on the plan. The chiefs found that the Wi-Fi signal coming off the Starlink satellite transceiver couldn't cover the entire ship, so during a stop in Pearl Harbor, they bought "signal repeaters and cable" to extend coverage. This was all extremely risky, and the chiefs don't appear to have taken amazing security precautions once everything was installed. For one thing, they called the network "STINKY."
Back in 2022, the official Starlink FAQ said that the device's "network name will appear as 'STARLINK' or 'STINKY' in device WiFi settings.” In other words, not only was this asinine conspiracy a terrible OPSEC idea, but the ringleaders didn't even change the default Wi-Fi name until they started getting questions about it. Yikes.
All of the chiefs who used, paid for, or even knew about the system without disclosing it were given "administrative nonjudicial punishment at commodore’s mast," said Navy Times. Marrero herself was relieved of her post last year, and she pled guilty during a court-martial this spring.
Hacking blind spot: States struggle to vet coders of election software
When election officials in New Hampshire decided to replace the state’s aging voter registration database before the 2024 election, they knew that the smallest glitch in Election Day technology could become fodder for conspiracy theorists. So they turned to one of the best — and only — choices on the market: A small, Connecticut-based IT firm that was just getting into election software. But last fall, as the new company, WSD Digital, raced to complete the project, New Hampshire officials made an unsettling discovery: The firm had offshored part of the work. The revelation prompted the state to take a precaution that is rare among election officials: It hired a forensic firm to scour the technology for signs that hackers had hidden malware deep inside the coding supply chain.
The probe unearthed some unwelcome surprises.
The first of those risks stemmed from Microsoft software that had been misconfigured — probably by accident — to connect to servers in foreign countries, including Russia. The outbound traffic could have made it easier for hackers to identify and reconnoiter the system and slip past defenses deployed to protect it.
In addition, code for the database — which was not in use yet — included popular open-source software, core-js, that is overseen by a Russian national, Denis Pushkarev. Core-js included “callbacks” to Pushkarev’s personal website that could allow Pushkarev to pinpoint specific users of core-js. An op-ed suggested that Pushkarev’s criminal history and publicized financial struggles could make him susceptible to blackmail. Pushkarev called the warnings from security firm ReversingLabs “stupid and unprofessional,” arguing that any effort to “inject anything malicious” into core-js would be noticed by its users.
The scan revealed another issue: A programmer had hard-coded the Ukrainian national anthem into the database, in an apparent gesture of solidarity with Kyiv. This was “a disaster averted,” said the person familiar with the probe, citing the risk that hackers could have exploited the first two issues to surreptitiously edit the state’s voter rolls, or use them and the presence of the Ukrainian national anthem to stoke election conspiracies.
NH officials said they opted not to cut ties with WSD because the company was transparent after they confronted it, and the scan revealed no signs that the system had been tampered with.
Leaked Disney Data Reveals Financial and Strategy Secrets
The trove of data from Disney that was leaked online by hackers earlier this summer includes a range of financial and strategy information that sheds light on the entertainment giant’s operations.
The leaked files include granular details about revenue generated by such products as Disney+ and ESPN+; park pricing offers the company has modeled; and what appear to be login credentials for some of Disney’s cloud infrastructure. Data that a hacking entity calling itself Nullbulge released online spans more than 44 million messages from Disney’s Slack workplace communications tool, upward of 18,800 spreadsheets and at least 13,000 PDFs. Some Slack channels in the cache contain detailed information about staff aboard the company’s cruises, including passport numbers, visa details, places of birth and physical addresses, as well as some current assignments. Another spreadsheet contained names, addresses and phone numbers for some Disney Cruise Line passengers.
The scope of the material taken appears to be limited to public and private channels within Disney’s Slack that one employee had access to. No private messages between executives appear to be included. Slack is only one online forum in which Disney employees communicate at work. Nullbulge claims to be a Russia-based hacktivist group that advocates for artist rights, but security researchers believe the hack is the work of a lone individual based in the U.S.
Security biz Verkada to pay $3M penalty under deal that also enforces infosec upgrade
You may remember the California outfit from a 2021 security incident that flowed from an admin-level username and password combo for its systems being left online. Hacktivists found those credentials and used them to access CCTV cameras, potentially as many as 150,000, installed in Tesla factories, Cloudflare offices, hospitals, and a prison, among other facilities. The incident saw US authorities file a complaint against Verkada, alleging numerous security failings within the business itself – including possible Health Insurance Portability and Accountability Act (HIPAA) violations and misrepresentations of other activities.
According to a proposed order agreed to by the regulator and Verkada, the biz sent promotional emails without the option to unsubscribe, and without a physical address listed – in violation of America's Controlling the Assault of Non-Solicited Pornography and Marketing (CAN-SPAM) Act. In addition to the penalty, the biz will have to step up its security practices – including implementing a proper infosec program for the next 20 years, training staff in best practices at least once a year, implementing multi-factor authentication, and engaging a third party to check its systems.
Critical server-side vulnerability in Microsoft Copilot Studio gives illegal access to internal infrastructure
The flaw is attributed to improper handling of redirect status codes in user-configurable actions, which allows attackers to manipulate HTTP requests.
The Server-Side Request Forgery (SSRF) vulnerability identified in Copilot Studio stems from the manipulation of an application to make server-side HTTP requests to unintended targets or locations. This manipulation can lead to unauthorized access to internal resources that are typically protected. Essentially, an attacker could exploit this flaw to make requests on behalf of the application to sensitive internal resources, revealing potentially sensitive data. In the case of Copilot Studio, the SSRF vulnerability could have been exploited to access Microsoft’s Instance Metadata Service (IMDS). The IMDS is a common target for SSRF attacks in cloud environments because it can yield information such as managed identity access tokens. These tokens can then be used to gain further access to shared resources within the environment, including databases.
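To make the mechanics concrete, here is a minimal, hypothetical sketch (not Copilot Studio's actual code) of the target check a server-side fetcher needs before following attacker-influenced URLs. The key point from the flaw above: the check must be re-applied on every redirect hop, not just the initial URL.

```python
# Illustrative sketch: why following HTTP redirects server-side
# enables SSRF against cloud metadata services.
from ipaddress import ip_address
from urllib.parse import urlparse

# The cloud Instance Metadata Service lives at a fixed link-local address.
IMDS_IP = "169.254.169.254"

def is_forbidden_target(url: str) -> bool:
    """Return True if a server-side fetch of `url` would touch internal
    or metadata infrastructure rather than the public internet."""
    host = urlparse(url).hostname or ""
    try:
        addr = ip_address(host)
    except ValueError:
        # Hostname, not a literal IP: a real defense must also resolve it
        # and re-check the result, since DNS can point anywhere.
        return False
    return addr.is_private or addr.is_link_local or str(addr) == IMDS_IP

# The vulnerable pattern: an attacker-controlled "action" URL passes this
# check once, but the fetcher then follows a 30x redirect straight to the
# metadata endpoint. A safe fetcher re-applies the check on EVERY hop.
```

The tokens exposed by IMDS are exactly why this single missing re-check escalates from "fetch an internal page" to "authenticate as the service."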
The Chase ATM 'glitch' that went viral is likely check fraud, bank says
Videos spread across TikTok showing people depositing checks for large amounts of money at ATMs and then making a withdrawal for a smaller but still substantial amount before the check cleared. Once they got the cash, they believed they had found a glitch in the system and were getting free money. It all might sound too good to be true, and it really is, because this process is just a form of check fraud, a criminal offense. Chase Bank, in a statement to USA Today, said the issue has now been addressed. "Bank errors in your favor are almost never in your favor. In the case of this 'glitch,' it was just check fraud.”
HACKING
FBI busts musician’s elaborate AI-powered $10M streaming-royalty heist
While the AI-generated element of this story is novel, Smith allegedly broke the law by setting up an elaborate fake-listener scheme. The US Attorney for the Southern District of New York announced the charges, which include wire fraud and money laundering conspiracy. If convicted, Michael Smith, age 52, could face up to 20 years in prison for each charge.
Smith's scheme, which prosecutors say ran for seven years, involved creating thousands of fake streaming accounts using purchased email addresses. He developed software to play his AI-generated music on repeat from various computers, mimicking individual listeners from different locations. In an industry where success is measured by digital listens, Smith's fabricated catalog reportedly managed to rack up billions of streams.
By June 2019, Smith was earning about $110,000 monthly, sharing a portion with his co-conspirators. The NYT reports that in an email earlier this year, he boasted of reaching 4 billion streams and $12 million in royalties since 2019.
The Underground World of Black Market Chatbots Is Thriving
Illicit large language models (LLMs) can make up to $28,000 in two months from sales on underground markets. That’s just the tip of the iceberg, according to the study, which looked at more than 200 examples of malicious LLMs (or malas) listed on underground marketplaces between April and October 2023. The LLMs fall into two categories: those that are outright uncensored LLMs, often based on open-source standards, and those that jailbreak commercial LLMs out of their guardrails using prompts.
The malicious LLMs can be put to work in a variety of different ways, from writing phishing emails (a separate study estimates that LLMs can reduce the cost of producing such missives by 96%) to developing malware to attack websites.
Sextortion Scams Now Include Photos of Your Home
An old but persistent email scam known as “sextortion” has a new personalized touch: The missives, which claim that malware has captured web/phone-cam footage of recipients pleasuring themselves, now include a photo of the target’s home in a bid to make threats about publishing the videos more frightening and convincing. Following a salutation that includes the recipient’s full name, the start of the message reads, “Is visiting [recipient’s street address] a more convenient way to contact if you don’t take action. Nice location btw.”
How Hackers Bypass MFA, And What You Can Do About It
Mandiant fell victim to a scammer earlier this year. An attacker hacked into the company's X (formerly Twitter) account and used it to run a cryptocurrency scam, defrauding many users. It turns out that multifactor authentication is not a foolproof solution, even for cybersecurity companies, let alone regular users. Hackers can bypass MFA, and many techniques have proven successful. In short, stealing an authentication token is often enough. Hackers can do this using an information-stealing Trojan to collect data from the victim's system. An example of such malware is Meduza Stealer, which extracts data from hundreds of browsers, MFA applications, crypto wallets and password managers.
Thanks to malware, attackers can also intercept emails, often obtaining one-time access codes for targeted accounts.
Intercepting notifications from an authentication app on the victim's smartphone works similarly. Spyware is installed on mobile devices to capture SMS messages containing MFA data.
Keyloggers, which are utilities installed on victims' devices to record keystrokes when entering logins and passwords, are also frequently used to hack accounts.
Here are some tips to avoid scams and stay safe while using MFA:
1. Only approve MFA notifications you are expecting. If you receive a push notification or SMS code that you did not initiate, do not approve it.
2. Use authenticator apps (like Google Authenticator) over SMS-based MFA, as they are more secure and less susceptible to interception.
3. Frequently check your account activity for any unauthorized logins or changes. Most services offer an option to view recent login activity.
4. Be aware of phishing techniques that trick you into providing MFA codes. Never click on links or follow instructions from unsolicited messages.
5. Turn on notifications for any changes to your account, such as password resets or MFA changes, so you can quickly respond to unauthorized actions.
6. Combine MFA with strong and unique passwords for each of your accounts.
7. Keep your operating systems, browsers and security software up to date to protect against malware that can steal authentication tokens.
8. Ensure all your devices are secure with passwords, PINs or biometric locks, and install reputable antivirus software to detect and prevent malware.
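On tip 2, authenticator apps are harder to intercept partly because the code never crosses the network: it is derived locally from a shared secret and the current time. A minimal TOTP sketch per RFC 6238 (standard library only; the secret below is the RFC's published test value, not a real credential):

```python
# Minimal time-based one-time password (TOTP) per RFC 6238,
# showing why authenticator-app codes are generated offline.
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, at=None, digits=6, step=30):
    """Compute a TOTP code from a base32-encoded shared secret."""
    key = base64.b32decode(secret_b32)
    # Counter = number of 30-second intervals since the Unix epoch.
    counter = int((time.time() if at is None else at) // step)
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                       # dynamic truncation
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test secret: base32 of the ASCII string "12345678901234567890".
RFC_SECRET = "GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ"
```

Note that this only protects the code in transit; as the article describes, malware on the device that holds the secret (or a phishing page that relays the code in real time) still defeats it.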
GitHub Actions Vulnerable to Typosquatting, Exposing Developers to Hidden Malicious Code
If developers make a typo in their GitHub Action that matches a typosquatter's action, applications could be made to run malicious code without the developer even realizing. The attack is possible because anyone can publish a GitHub Action by creating a GitHub account with a temporary email account. Given that actions run within the context of a user's repository, a malicious action could be exploited to tamper with the source code, steal secrets, and use it to deliver malware. All that the technique involves is for the attacker to create organizations and repositories with names that closely resemble popular or widely-used GitHub Actions. If a user makes inadvertent spelling errors when setting up a GitHub action for their project and that misspelled version has already been created by the adversary, then the user's workflow will run the malicious action as opposed to the intended one. In fact, a compromised action can even leverage your GitHub credentials to push malicious changes to other repositories within your organization, amplifying the damage across multiple projects. The actual problem is even more concerning because here we are only highlighting what happens in public repositories. The impact on private repositories, where the same typos could be leading to serious security breaches, remains unknown.
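As a mitigation sketch, a team could lint workflow files for near-miss action names before they ever run. The allowlist and helper below are hypothetical (not an existing tool), using fuzzy matching to catch exactly the one-character typos the attack relies on:

```python
# Hypothetical pre-commit check: flag `uses:` references in a GitHub
# Actions workflow that nearly match, but do not exactly match, a
# vetted allowlist of trusted actions.
import difflib
import re

TRUSTED_ACTIONS = {
    "actions/checkout",
    "actions/setup-python",
    "actions/upload-artifact",
}

def suspicious_uses(workflow_yaml):
    """Return action references that look like typos of trusted ones."""
    refs = re.findall(r"uses:\s*([\w./-]+)@", workflow_yaml)
    flagged = []
    for ref in refs:
        if ref in TRUSTED_ACTIONS:
            continue
        # A close-but-not-exact match is the typosquatting signature.
        if difflib.get_close_matches(ref, TRUSTED_ACTIONS, n=1, cutoff=0.85):
            flagged.append(ref)
    return flagged

workflow = """
steps:
  - uses: actions/checkout@v4
  - uses: action/checkout@v4
"""
```

Here the second step (missing the "s" in "actions") would be flagged, while the legitimate reference passes.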
YubiKeys are vulnerable to cloning attacks thanks to newly discovered side channel
The YubiKey 5, the most widely used hardware token for two-factor authentication based on the FIDO standard, contains a cryptographic flaw that makes the finger-size device vulnerable to cloning when an attacker gains temporary physical access to it.
All YubiKeys running firmware prior to version 5.7—which was released in May and replaces the Infineon cryptolibrary with a custom one—are vulnerable. Updating key firmware on the YubiKey isn’t possible. The vulnerability has existed for more than 14 years in Infineon's top security chips. These chips and the vulnerable part of the cryptographic library went through about 80 Common Criteria certification evaluations at level AVA VAN 4 (for TPMs) or AVA VAN 5 (for the others) from 2010 to 2024 (and a bit fewer than 30 certificate maintenances).
The researchers demonstrated exploitation through realistic experiments, showing that an adversary only needs access to the device for a few minutes. The offline phase took them about 24 hours; with more engineering work on the attack, it could take less than one hour. The attack requires about $11,000 worth of equipment and a sophisticated understanding of electrical and cryptographic engineering, meaning it would likely be carried out only by nation-states or other entities with comparable resources.
SpyAgent Android malware steals your crypto recovery phrases from images
A new Android malware named SpyAgent uses optical character recognition (OCR) technology to steal cryptocurrency wallet recovery phrases from screenshots stored on the mobile device. A cryptocurrency recovery phrase, or seed phrase, is a series of 12-24 words that acts as a backup key for a cryptocurrency wallet. These phrases are used to restore access to your cryptocurrency wallet and all of its funds in the event you lose a device, data is corrupted, or you wish to transfer your wallet to a new device. These secret phrases are highly sought after by threat actors because anyone who obtains them can restore the wallet on their own device and steal all of the funds stored within it.
White House thinks it's time to fix the insecure glue of the internet: Yup, BGP
BGP more or less glues the internet as we know it together. It's used to manage the routes your online traffic takes between the networks, known as autonomous systems or ASes, that together constitute the internet.
BGP does not check to see whether a remote network announcing a traffic path change has the authority to do so. Nor does it verify that messages exchanged between networks are authentic, or check whether routing announcements violate business policies between neighboring networks. "Route hijacks can expose personal information; enable theft, extortion, and state-level espionage; disrupt security-critical transactions; and disrupt critical infrastructure operations," the report says. "While most BGP incidents are accidental, the concern over malicious actors has elevated this issue to a national security priority."
There is a cryptographic authentication scheme available to mitigate these risks: Resource Public Key Infrastructure (RPKI), which includes Route Origin Validation (ROV) and Route Origin Authorization (ROA). But this safety mechanism isn't foolproof, nor is it universally deployed. In Europe, according to the White House's roadmap, some 70 percent of BGP routes have published ROAs and are ROV-valid. Elsewhere, adoption is lower. In the US, it's only 39 percent, because the IP space overseen by the American Registry for Internet Numbers (ARIN) is larger and older than that of Europe or Asia, and because the US government itself lags the private sector in RPKI adoption.
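The ROV decision logic itself is simple; the hard part is the signed-data plumbing. A toy sketch with made-up ROA data (real validators consume cryptographically signed ROAs fetched from the RPKI repositories, not a hardcoded list):

```python
# Minimal sketch of RPKI Route Origin Validation (RFC 6811 logic).
from ipaddress import ip_network

# Each ROA authorizes an AS number to originate a prefix, down to a
# maximum prefix length. Values below are illustrative.
ROAS = [
    ("192.0.2.0/24", 24, 64500),
    ("198.51.100.0/22", 24, 64501),
]

def rov(prefix, origin_as):
    """Classify a BGP announcement: 'valid', 'invalid', or 'not-found'."""
    net = ip_network(prefix)
    covered = False
    for roa_prefix, max_len, asn in ROAS:
        if net.subnet_of(ip_network(roa_prefix)):
            covered = True  # some ROA covers this address space
            if asn == origin_as and net.prefixlen <= max_len:
                return "valid"
    # Covered by a ROA but wrong origin AS (or too specific): the
    # signature of a hijack or leak. No covering ROA at all: unknown.
    return "invalid" if covered else "not-found"
```

The low adoption figures above matter because routers typically only drop "invalid" routes; the large "not-found" population still propagates unchecked.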
APPSEC, DEVSECOPS, DEV
Security boom is over, with over a third of CISOs reporting flat or falling budgets
According to the fifth annual survey of CISOs carried out by security analyst house IANS Research, over a third of the 755 security bosses polled admitted they weren't hiring, although overall staffing growth rates were less than half of those seen in 2022.
The survey does note that overall security spending is still up 8 percent in 2024, although nowhere near the heady days of 2021 (16 percent growth) and 2022 (17 percent). This slowdown is attributed not to a general malaise but to the fact that some sectors, notably manufacturing, had been playing catch-up on their security spending and are now up to speed.
Another encouraging sign is that security spending as a proportion of the overall IT budget is on the rise, up from 8.6 percent in 2020 to 13.2 percent this year. That trend looks set to continue, though security spending still typically amounts to less than 1 percent of revenue.
The survey also showed signs that, at last, the C-suite execs are grokking the need for security spending. This is in part down to last year's SEC rule changes on reporting security incidents, as well as concerns over corporate liability to lawsuits.
There's still a continuing talent shortage, so finding and retaining people is very challenging. Anecdotally, the biggest factor [in retention] ends up being opportunities for growth. If there's no way forward, people feel they are stagnating, especially after two to four years. It's a very special job that has levels of stress that exceed other roles.
90% of CISOs will see a budget increase next year. Cybersecurity budgets are, on average, just 5.7% of IT annual spending.
Tech sprawl is the silent killer of budget gains, Forrester warns. CISOs, on average, now spend just over a third of their budgets on software, double what they spend on hardware and more than their personnel costs.
Cloud security, new security technology run on-premises, and security awareness/training initiatives are predicted to drive security budget increases of 10% or more in 2025. Notably, 81% of security technology decision-makers predict their spending on cloud security will increase.
A staggering 91% of enterprises have fallen victim to software supply chain incidents in just a year, underscoring the need for better safeguards for continuous integration/deployment (CI/CD) pipelines. Open-source libraries, third-party development tools, and legacy APIs created years ago are just a few threat vectors that make software supply chains and APIs more vulnerable.
Enterprises that experienced a breach targeting IoT devices were 34% more likely to report cumulative breach costs between $5 million and $10 million than organizations that experienced cyberattacks on non-IoT devices. The National Institute of Standards and Technology (NIST) provides NIST Special Publication 800-207 (Zero Trust Architecture), which is well-suited for securing IoT devices, given its focus on securing networks where traditional perimeter-based security isn’t scaling up to the challenge of protecting every endpoint.
New research by OPSWAT & F5 reveals critical cyber concerns
Over the past year,
- 35% of respondents reported experiencing a malware breach,
- 28% encountered credential theft or unauthorised account access, and
- 24% faced security compromises involving vendors, contractors, or other third parties.
DDoS attacks pose another significant threat, with only 25% of respondents feeling their organisations are fully prepared to respond. Preparedness for other threats—such as Advanced Persistent Threats (APTs), botnets, API security issues, and zero-day malware—was reported to be even lower.
Only 27% of respondents regularly referenced the Open Web Application Security Project (OWASP) for best practices in web application security, compared to
- 53% who referred to National Institute of Standards and Technology (NIST) guidelines and
- 37% who followed guidelines from the Cybersecurity and Infrastructure Security Agency (CISA).
The research pointed to a perceived lack of leadership support as a critical concern among IT leaders. They reported feeling under-resourced, citing budget shortages, insufficient staff training, inadequate technical partnerships, and disparate security ecosystems as key factors. Additionally, they noted a general lack of attention from top management as a significant impediment to effective cybersecurity preparedness.
VENDORS & PLATFORMS
$2,000 per month for next-gen AI models? OpenAI could reportedly hike subscription prices amid bankruptcy claims: "That's a price point for an employee, not a chatbot. The only way it would make any sense is if it was legit AGI"
OpenAI potentially hiking the price of its subscription-based services comes after it recently hit a major milestone—surpassing 1 million paid business users.
As you may know, the startup charges $20 monthly for its ChatGPT Plus service, which gives users priority access to new features, custom GPTs, DALL-E 3 image generation technology, and more. OpenAI is reportedly leveraging a new technique dubbed post-training to develop new LLMs. It allows the models to fine-tune their performance and capabilities even after the training phase. It can be based on metrics like human feedback, rating the quality of its responses to queries.
Ironically, OpenAI, debatably the face of AI, is on the cusp of bankruptcy within the next 12 months. Projections indicate that it could incur losses amounting to $5 billion. However, Microsoft, NVIDIA, and Apple will reportedly extend the ChatGPT maker's lifeline with another round of funding. This cash infusion could push OpenAI's market valuation to $100 billion. OpenAI spends up to $700,000 per day to keep ChatGPT running. This cost doesn't include AI's high electricity and cooling water demand.
Copilot for Microsoft 365 might boost productivity if you survive the compliance minefield
Copilot for Microsoft 365 is an add-on for which Microsoft expects $30 per user per month, with an annual subscription. It uses large language models (LLMs) and integrates data with Microsoft Graph and Microsoft 365 apps and services to summarize, predict, and generate content. The much-touted productivity gains from the AI service need to be balanced by its risks – even Microsoft notes "users should always take caution and use their best judgment when using outputs from Copilot for Microsoft 365" – and worries over compliance and data governance must be addressed before unleashing the service on an organization.
Microsoft has published a Transparency Note for Copilot for Microsoft 365, warning enterprises to ensure user access rights are correctly managed before rolling out the technology. In addition to ensuring user access is configured correctly, the Transparency Note warns organizations to consider legal and compliance issues when using the service, particularly in regulated industries. "Now, maybe if you set up a totally clean Microsoft environment from day one, that would be alleviated. But nobody has that. People have implemented these systems over time, particularly really big companies. And you get these conflicting authorizations or conflicting access to data."
Amazon congratulates itself for AI code that mostly works
Amazon Web Services took a moment to pat itself on the back for being thought of inside the box, specifically, the upper right-hand square that's part of Gartner's trademarked Magic Quadrant. This particular set of boxes maps the IT consultancy's view of AI code assistants. AWS is understandably chuffed to land a spot in the "leaders" quadrangle for its Q Developer service, alongside GitHub Copilot, GitLab Duo, and Google Cloud's Gemini Code Assist.
Amazon claims that Q Developer's agents for code transformation helped Amazon migrate 30,000 production applications from Java 8 to Java 17, saving over 4,500 years of development work, in addition to the $260 million in performance improvements.
They did not mention how often AI code suggestions need correction or the costs associated with these fixes. Last year, researchers evaluated ChatGPT, GitHub Copilot, and Amazon CodeWhisperer (now Q Developer), and found that the AI helpers generated correct code 65.2 percent, 46.3 percent, and 31.1 percent of the time, respectively. BT Group reported accepting 37 percent of Q's code suggestions, and National Australia Bank reported a 50 percent acceptance rate.
Carnegie Mellon computer science professors Eunsuk Kang and Mary Shaw conclude: "Generative AI is now eagerly inflating our aspirations, but its capability is not yet trustworthy and robust enough to be part of the stable core of [software engineering] methods. AI is already demonstrably useful under careful supervision, and we can expect its utility for routine programming tasks to improve quickly."
New Version of the Bluetooth Core Specification
Bluetooth Core Specification version 6.0 includes new features and several feature enhancements, including Bluetooth Channel Sounding, decision-based advertising filtering, monitoring advertisers, an enhancement to the Isochronous Adaptation Layer (ISOAL), the LL extended feature set, and a frame space update.
Bluetooth Channel Sounding brings true distance awareness, introducing transformative benefits across various applications. The user experience of Find My solutions can be greatly improved, making it easier and faster to locate lost items.
Decision-Based Advertising Filtering: The Bluetooth Low Energy (LE) Extended Advertising feature supports a series of related packets being transmitted on both primary and secondary radio channels.
Monitoring Advertisers: The host component of an observer device may instruct the Bluetooth LE controller to filter duplicate advertising packets. When filtering of this type is active, the host will only receive a single advertising packet from each unique device.
ISOAL Enhancement: The Isochronous Adaptation Layer (ISOAL) makes it possible for larger data frames to be transmitted in smaller link-layer packets and ensures the associated timing information that is needed for the correct processing of the data by receivers can be reconstituted.
LL Extended Feature Set: With this advancement, devices can exchange information about the link-layer features that they each support.
Frame Space Update: Prior versions of the Bluetooth Core Specification defined a constant value for the time that separates adjacent transmissions of packets in a connection event or connected isochronous stream (CIS) subevent; version 6.0 allows this spacing to take a range of values negotiated between devices.
Kaspersky offloads U.S. antivirus customers to Pango Group
Cybersecurity company Pango is acquiring all of Kaspersky Lab's U.S. antivirus customers following the Commerce Department's ban on sales of the Russian antivirus software. Pango is acquiring roughly 1 million new users through the deal.
Pango owns and offers a portfolio of cybersecurity products, including VPNs, antivirus software and identity theft protection tools.
Kaspersky customers will transition to Pango Group's antivirus brand, Ultra AV.
Elastic’s return to open source
After Elastic changed its license in 2021, AWS could no longer copy-and-paste Elasticsearch as one of its service offerings. Eventually, AWS opted to fork Elasticsearch and develop a rival product, OpenSearch, which has seen growing success. Now that “Amazon is fully invested in their fork” and “the market confusion [around Elastic’s Elasticsearch trademarks] has been (mostly) resolved,” Elastic felt it no longer had to worry about AWS selling Elasticsearch as its own branded product. This has resulted in a “partnership with AWS [that] is stronger than ever.” This resonates with my own experience at MongoDB: AWS can be a fantastic partner. It just sometimes needs help getting there.
LEGAL & REGULATORY
US, Britain, EU to sign first international AI treaty
The first legally binding international AI treaty will be open for signing on Thursday by the countries that negotiated it, including European Union members, the United States and Britain.
The Framework Convention covers the use of AI systems by public authorities – including private actors acting on their behalf – and private actors. The Convention offers Parties two modalities to comply with its principles and obligations when regulating the private sector: Parties may opt to be directly obliged by the relevant Convention provisions or, as an alternative, take other measures to comply with the treaty’s provisions while fully respecting their international obligations regarding human rights, democracy and the rule of law. Parties to the Framework Convention are not required to apply the provisions of the treaty to activities related to the protection of their national security interests but must ensure that such activities respect international law and democratic institutions and processes. The Framework Convention does not apply to national defence matters nor to research and development activities, except when the testing of AI systems may have the potential to interfere with human rights, democracy, or the rule of law.
The AI Convention mainly focuses on the protection of human rights of people affected by AI systems and is separate from the EU AI Act, which entered into force last month.
Data watchdog fines Clearview AI $33M for 'illegal' data collection
The problem, as far as the Dutch DPA is concerned, is that people in the images scraped by Clearview AI are not aware of the process, and have not given consent. The Dutch DPA has ordered Clearview to stop those violations. If Clearview fails to do this, the company will have to pay penalties for non-compliance in a total maximum amount of 5.1 million euros on top of the fine.
In response, Jack Mulcaire, Chief Legal Officer at Clearview AI, said: "Clearview AI does not have a place of business in the Netherlands or the EU, it does not have any customers in the Netherlands or the EU, and does not undertake any activities that would otherwise mean it is subject to the GDPR. This decision is unlawful, devoid of due process, and is unenforceable."
The Dutch DPA said it was "looking for ways to make sure that Clearview stops the violations. Among other things, by investigating if the directors of the company can be held personally responsible for the violations. We are now going to investigate if we can hold the management of the company personally liable and fine them for directing those violations. That liability already exists if directors know that the GDPR is being violated, have the authority to stop that, but omit to do so, and in this way consciously accept those violations." It sounds like any planned European vacation for Clearview execs would need to be put on ice for the time being.
US charges Russian military officers for unleashing wiper malware on Ukraine
The indictment, filed in US District Court for the District of Maryland, said that five of the men were officers in Unit 29155 of the Russian Main Intelligence Directorate (GRU), a military intelligence agency of the General Staff of the Armed Forces. Along with a sixth defendant, prosecutors alleged, they engaged in a conspiracy to hack, exfiltrate data, leak information, and destroy computer systems associated with the Ukrainian government in advance of the Russian invasion of Ukraine in February 2022. According to court documents, on Jan. 13, 2022, the defendants conspired to use a U.S.-based company’s services to distribute malware known in the cybersecurity community as “WhisperGate,” which was designed to look like ransomware, to dozens of Ukrainian government entities’ computer systems.
And Now For Something Completely Different …
How a Group of Teenagers Pranked 'One Million Checkboxes'
Nolen Royalty launched his short-lived viral site "One Million Checkboxes" in June. (Any visitor could check or uncheck a box in the grid — which would change how it displayed for every other visitor to the site, in near real-time.) Within days there were half a million people on the site. He'd stored the state of his one million checkboxes as a million bits (125,000 bytes) in a database, and got a surprise when looking at the raw bytes (converted into their ASCII character values)... they spelled out a URL. Had someone hacked into his database? No, the answer was even stranger. “Somebody was writing me a message in binary.”
The URL led to a Discord channel, where he found himself talking to the orchestrators of the elaborate prank. And it was then that they asked him: "Have you seen your checkboxes as a 1,000 x 1,000 image yet?" It turns out they'd also input two alternate versions of the same message — one in base64, and one drawn out as a fully-functional QR code. (And some drawings....)
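The binary trick is easy to reproduce: treat the checkbox states as a bitstream and read it eight bits at a time as ASCII. A small illustrative sketch (the message here is a stand-in, not the pranksters' actual URL):

```python
# Decode a run of checkbox states (1 = checked, 0 = unchecked) as
# 8-bit ASCII, most significant bit first.
def bits_to_text(bits):
    """Group bits into bytes and decode them as ASCII characters."""
    chars = []
    for i in range(0, len(bits) - 7, 8):
        byte = 0
        for bit in bits[i:i + 8]:
            byte = (byte << 1) | bit
        chars.append(chr(byte))
    return "".join(chars)

# Checking boxes in just the right positions spells out a message:
message = "hi!"
bits = [int(b) for ch in message for b in format(ord(ch), "08b")]
```

Running `bits_to_text(bits)` recovers the original string, which is exactly what the teenagers did in reverse: they checked and unchecked boxes so the site's own bitfield spelled out their URL.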
[rG: Cool video for Devs]