Friday, April 4, 2025

SpotBugs Access Token Theft Identified as Root Cause of GitHub Supply Chain Attack

Apr 04, 2025 | Ravie Lakshmanan | Vulnerability / Open Source

The cascading supply chain attack that initially targeted Coinbase before widening to users of the "tj-actions/changed-files" GitHub Action has been traced further back to the theft of a personal access token (PAT) related to SpotBugs.

"The attackers obtained initial access by taking advantage of the GitHub Actions workflow of SpotBugs, a popular open-source tool for static analysis of bugs in code," Palo Alto Networks Unit 42 said in an update this week. "This enabled the attackers to move laterally between SpotBugs repositories, until obtaining access to reviewdog."

There is evidence to suggest that the malicious activity began as far back as November 2024, although the attack against Coinbase did not take place until March 2025.

Unit 42 said its investigation began with the knowledge that reviewdog's GitHub Action was compromised due to a leaked PAT associated with the project's maintainer, which subsequently enabled the threat actors to push a rogue version of "reviewdog/action-setup" that, in turn, was picked up by "tj-actions/changed-files" due to it being listed as a dependency via the "tj-actions/eslint-changed-files" action.
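
One common mitigation for this kind of tag-repointing compromise is to reference actions by full commit SHA rather than a mutable tag or branch. As a rough illustration (not from the report; the workflow contents below are fabricated), a minimal sketch that flags `uses:` references not pinned to a 40-character SHA:

```python
import re

# Matches "uses: owner/repo@ref" lines in a GitHub Actions workflow.
USES_RE = re.compile(r"^\s*-?\s*uses:\s*([\w.-]+/[\w./-]+)@(\S+)", re.MULTILINE)
SHA_RE = re.compile(r"^[0-9a-f]{40}$")  # a full commit SHA is immutable

def unpinned_actions(workflow_text: str) -> list[tuple[str, str]]:
    """Return (action, ref) pairs referenced by a mutable tag or branch."""
    return [
        (action, ref)
        for action, ref in USES_RE.findall(workflow_text)
        if not SHA_RE.match(ref)
    ]

workflow = """
jobs:
  lint:
    steps:
      - uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
      - uses: tj-actions/changed-files@v44   # mutable tag: can be repointed
"""
print(unpinned_actions(workflow))  # only the tag-referenced action is flagged
```

A pinned SHA still resolves to the same content even if an attacker force-pushes new code to the tag, which is exactly the failure mode seen here.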

It has since been uncovered that the maintainer was also an active participant in another open-source project called SpotBugs.

The attackers are said to have pushed a malicious GitHub Actions workflow file to the "spotbugs/spotbugs" repository under the disposable username "jurkaofavak," causing the maintainer's PAT to be leaked when the workflow was executed.

It's believed that the same PAT facilitated access to both "spotbugs/spotbugs" and "reviewdog/action-setup," meaning the leaked PAT could be abused to poison "reviewdog/action-setup."

"The attacker somehow had an account with write permission in spotbugs/spotbugs, which they were able to use to push a branch to the repository and access the CI secrets," Unit 42 said.

As for how the write permissions were obtained, it has come to light that the user behind the malicious commit to SpotBugs, "jurkaofavak," was invited to the repository as a member by one of the project maintainers themselves on March 11, 2025.

In other words, the attackers managed to obtain the PAT of the SpotBugs repository to invite "jurkaofavak" to become a member. This, the cybersecurity company said, was carried out by creating a fork of the "spotbugs/sonar-findbugs" repository and creating a pull request under the username "randolzfow."
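
An unexpected invite like this is detectable by periodically auditing repository membership. GitHub's REST API exposes a repository's collaborators at `GET /repos/{owner}/{repo}/collaborators`; below is a hypothetical sketch that diffs that list against a maintainer-kept roster (account names other than the one reported in the article are fabricated):

```python
import json
import urllib.request

def fetch_collaborators(owner: str, repo: str, token: str) -> list[dict]:
    """Fetch collaborators via the GitHub REST API (live network call)."""
    req = urllib.request.Request(
        f"https://api.github.com/repos/{owner}/{repo}/collaborators",
        headers={"Authorization": f"Bearer {token}",
                 "Accept": "application/vnd.github+json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def unexpected_members(collaborators: list[dict], allowlist: set[str]) -> list[str]:
    """Return collaborator logins that are not on the expected roster."""
    return sorted(c["login"] for c in collaborators if c["login"] not in allowlist)

# Offline check against a sample API response shape:
sample = [{"login": "known-maintainer"}, {"login": "jurkaofavak"}]
print(unexpected_members(sample, {"known-maintainer"}))  # ['jurkaofavak']
```

Running such a diff on a schedule would have surfaced the rogue member well before the malicious workflow was pushed.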

"On 2024-11-28T09:45:13 UTC, [the SpotBugs maintainer] modified one of the 'spotbugs/sonar-findbugs' workflows to use their own PAT, as they were having technical difficulties in a part of their CI/CD process," Unit 42 explained.

"On 2024-12-06 02:39:00 UTC, the attacker submitted a malicious pull request to spotbugs/sonar-findbugs, which exploited a GitHub Actions workflow that used the pull_request_target trigger."

The "pull_request_target" trigger is a GitHub Actions workflow trigger that allows workflows triggered by pull requests from forks to run with access to secrets – in this case, the PAT – leading to what's called a poisoned pipeline execution (PPE) attack.
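
The dangerous combination is a `pull_request_target` workflow that also checks out the pull request's head, letting fork-supplied code run with secrets in scope. A crude, string-level sketch of flagging that pattern (a real audit would parse the YAML properly):

```python
import re

def risky_workflow(workflow_text: str) -> bool:
    """Flag workflows that combine the pull_request_target trigger
    (secrets available) with a checkout of the attacker-controlled PR head."""
    has_trigger = "pull_request_target" in workflow_text
    # Checking out github.event.pull_request.head lets fork code run with secrets.
    checks_out_head = re.search(
        r"ref:\s*\$\{\{\s*github\.event\.pull_request\.head", workflow_text
    ) is not None
    return has_trigger and checks_out_head

workflow = """
on: pull_request_target
jobs:
  build:
    steps:
      - uses: actions/checkout@v4
        with:
          ref: ${{ github.event.pull_request.head.sha }}
"""
print(risky_workflow(workflow))  # True
```

Workflows that match warrant a closer look: either the trigger should be `pull_request` (no secrets), or the checkout should not reference the fork's head.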

The SpotBugs maintainer has since confirmed that the PAT that was used as a secret in the workflow was the same access token that was later used to invite "jurkaofavak" to the "spotbugs/spotbugs" repository. The maintainer has also rotated all of their tokens and PATs to revoke and prevent further access by the attackers.

One major unknown in all this is the three-month gap between when the attackers leaked the SpotBugs maintainer's PAT and when they abused it. It's suspected that the attackers were keeping an eye on the projects that depended on "tj-actions/changed-files" and waited to strike a high-value target like Coinbase.

"Having invested months of effort and after achieving so much, why did the attackers print the secrets to logs, and in doing so, also reveal their attack?" Unit 42 researchers pondered.

Found this article interesting? Follow us on Twitter and LinkedIn to read more exclusive content we post.



from The Hacker News https://ift.tt/bFsiQ7T
via IFTTT

The Good, the Bad and the Ugly in Cybersecurity – Week 14

The Good | DoJ Seizes $8.2 Million in Cryptocurrency Linked to Romance Baiting Schemes

The DoJ has seized over $8.2 million in USDT (Tether) cryptocurrency, all stolen through ‘romance baiting’ scams, formerly known as ‘pig butchering’. Victims are manipulated into making investments on fake websites or apps after being promised substantial returns. They are then led to believe they are making profits, and so invest even more, only to run into a multitude of issues when trying to make any withdrawals. By that point, the scammers have already made off with all of the victim’s funds.

The FBI traced the laundering patterns backward across multiple platforms and networks, starting from centralized exchanges, then through Ethereum and TRON and DeFi protocols, and finally into storage wallets owned by the fraudsters. On the strength of that analysis, U.S. state investigators were able to file a dual legal forfeiture under wire fraud and money laundering charges.

Tether Limited cooperated by freezing the stolen funds, burning the original tokens, and reissuing them to law enforcement-controlled wallets. The asset seizure was successfully completed in November 2024, and now enables potential restitution for victims as the FBI continues to trace affected accounts.

Source: TRM Labs

One filed complaint identifies 38 cryptocurrency accounts tied to these scams, with total losses exceeding $5.2 million. Five named victims from Ohio, Michigan, California, Utah, and North Carolina collectively lost over $1.6 million, with the worst case involving someone liquidating their retirement account, leading to a $650,000 loss. These major seizures showcase the DoJ’s commitment to dismantling sophisticated cryptocurrency scams and ensuring justice for victims.

The Bad | China-linked Threat Actor Exploits Ivanti Bug CVE-2025-22457

A critical vulnerability in several Ivanti products is being actively exploited in the wild, with attackers using it to gain remote access, deploy malware, and establish long-term persistence in victim environments.

Tracked as CVE-2025-22457, the flaw is a stack-based buffer overflow vulnerability with a CVSS score of 9.0, affecting Ivanti’s Connect Secure, Policy Secure, and Neurons for ZTA Gateways. The bug allows remote, unauthenticated attackers to execute arbitrary code on vulnerable systems.

Ivanti says the issue stems from how the gateway handles certain limited character inputs—specifically, periods and numbers. While the flaw was initially assessed as not meeting the criteria for remote code execution or denial of service, subsequent analysis revealed that it could be exploited through sophisticated methods and evidence of active exploitation in the wild has since emerged.

A campaign abusing the vulnerability was observed in mid-March 2025 and attributed to a China-linked threat actor tracked as UNC5221. The attackers employed TRAILBLAZE, an in-memory dropper, to deliver the initial payload. Once access is established, BRUSHFIRE, a memory-resident backdoor, is injected into the web process of the Ivanti appliance, enabling remote command execution without writing to disk. To maintain persistence and expand access, the attackers leveraged a malware framework known as SPAWN, with capabilities for log tampering, memory extraction, and other post-exploitation activity.

Vulnerable edge devices attract threat actors like honey. UNC5221 (China-linked threat actor) has been actively exploiting CVE-2025-22457 (CVSS 9.0), a critical Ivanti VPN vulnerability, since mid-March 2025. Patch version 22.7R2.6. #CVE #Exploited #POC #patch #vulnerability

— Mario Rojas (@mariorojaschin.bsky.social), 3 April 2025 at 17:35

To mitigate the threat, Ivanti released a patch for Connect Secure 22.7R2.6 on February 11, 2025. Fixes for Policy Secure (22.7R1.4) and Neurons for ZTA Gateway (22.8R2.2) are scheduled for release later this month. Ivanti says that Policy Secure should not be exposed to the public internet and that Neurons for ZTA Gateway is not exploitable in production deployments.

Ivanti strongly encourages its customers to update to Connect Secure 22.7R2.6 as soon as possible. The company’s full mitigation advice can be found in its security advisory.
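
For fleet owners, a quick way to triage exposure is to compare appliance versions against the patched release quoted above. A small sketch, assuming version strings follow the `22.7R2.6` format used in the advisory:

```python
import re

def parse_ics_version(v: str) -> tuple[int, ...]:
    """Split a version string like '22.7R2.5' into comparable integers."""
    return tuple(int(n) for n in re.findall(r"\d+", v))

def vulnerable_to_cve_2025_22457(version: str, patched: str = "22.7R2.6") -> bool:
    """Per the advisory, Connect Secure releases before 22.7R2.6 are affected."""
    return parse_ics_version(version) < parse_ics_version(patched)

print(vulnerable_to_cve_2025_22457("22.7R2.5"))  # True
print(vulnerable_to_cve_2025_22457("22.7R2.6"))  # False
```

Note this also flags the end-of-life 9.x line, which sorts below 22.7R2.6; those appliances need migration rather than a patch.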

The Ugly | New WRECKSTEEL Malware Used in Cyber Espionage Campaigns Targeting Ukraine

Ukraine’s computer emergency response team, CERT-UA, has reported at least three cyberattacks in March targeting government agencies and critical infrastructure with a new malware dubbed ‘WRECKSTEEL’. The targeted espionage campaign is attributed to the threat cluster tracked as UAC-0219, which has been active since fall 2024 and is known for employing phishing emails to spread malicious links.


Attackers compromised government accounts to send messages with links to public file-sharing services such as Google Drive and DropMeFiles. In one phishing attempt, the attackers masqueraded as a Ukrainian government agency, sending out fake salary reduction notices to lure employees into clicking malicious links. Clicking the links triggered the download of a Visual Basic Script (VBS) loader, which then executed a PowerShell script designed to collect documents, images, PDFs, and presentations with specific extensions, and to capture screenshots.

While early versions of UAC-0219’s tooling relied on EXE files, VBS stealers, and the IrfanView image editor for data extraction, the integration of screenshot functionality directly into the PowerShell script confirms the malware’s continued development.

While CERT-UA did not directly attribute the attacks to a nation, similar phishing campaigns have historically originated from Russia. The attacks reflect increasing sophistication, using PowerShell-based techniques to bypass detection and collect targeted data. CERT-UA released a full list of IoCs to help organizations detect and mitigate these threats.



from SentinelOne https://ift.tt/XBk09gH
via IFTTT

Have We Reached a Distroless Tipping Point?

There's a virtuous cycle in technology that pushes the boundaries of what's being built and how it's being used. A new technology development emerges and captures the world's attention. People start experimenting and discover novel applications, use cases, and approaches to maximize the innovation's potential. These use cases generate significant value, fueling demand for the next iteration of the innovation, and in turn, a new wave of innovators create the next generation of use cases, driving further advancements.

Containerization has become the foundation of modern, cloud-native software development, supporting new use cases and approaches to building resilient, scalable, and portable applications. It also holds the keys to the next software delivery innovation, simultaneously necessitating the evolution to secure-by-design, continuously-updated software and serving as the means to get there.

Below, I'll talk through some of the innovations that led to our containerized revolution, as well as some of the traits of cloud-native software development that have led to this inflection point – one that has primed the world to move away from traditional Linux distros and towards a new approach to open source software delivery.

Iteration has moved us closer to ubiquity

There have been many innovations that have paved the way for more secure, performant open source delivery. In the interest of your time and my word count, I'll call out three particular milestones. Each step, from Linux Containers (LXC) to Docker and ultimately the Open Container Initiative (OCI), built upon its predecessor, addressing limitations and unlocking new possibilities.

LXC laid the groundwork by harnessing the Linux kernel's capabilities (namely cgroups and namespaces), to create lightweight, isolated environments. For the first time, developers could package applications with their dependencies, offering a degree of consistency across different systems. However, LXC's complexity for users and its lack of a standardized image distribution catalog hindered widespread adoption.

Docker emerged as a game-changer, democratizing container technology. It simplified the process of creating, running, and sharing containers, making them accessible to a broader audience. Docker's user-friendly interface and the creation of Docker Hub, a central repository for container images, fostered a vibrant ecosystem. This ease of use fueled rapid adoption, but also raised concerns about vendor lock-in and the need for interoperability.

Recognizing the potential for fragmentation, the OCI stepped in to standardize container formats and runtimes. By defining open specifications, the OCI ensured that containers could be built and run across different platforms, fostering a healthy, competitive landscape. Projects like runC and containerd, born from the OCI, provided a common foundation for container runtimes and enabled greater portability and interoperability.

The OCI standards also enabled Kubernetes (another vendor-neutral standard) to become a truly portable platform, capable of running on a wide range of infrastructure and allowing organizations to orchestrate their applications consistently across different cloud providers and on-premises environments. This standardization and its associated innovations unlocked the full potential of containers, paving the way for their ubiquitous presence in modern software development.

[Containerized] software is eating the world

The advancements in Linux, the rapid democratization of containers through Docker, and the standardization of OCI were all propelled by necessity, with the evolution of cloud-native app use cases pushing orchestration and standardization forward. Those cloud-native application characteristics also highlight why a general-purpose approach to Linux distros no longer serves software developers with the most secure, updated foundations to develop on:

Microservice-oriented architecture: Cloud-native applications are typically built as a collection of small, independent services, with each microservice performing a specific function. Each of these microservices can be built, deployed, and maintained independently, which provides a tremendous amount of flexibility and resiliency. Because each microservice is independent, software builders don't require comprehensive software packages to run a microservice, relying only on the bare essentials within a container.

Resource-conscious and efficient: Cloud-native applications are built to be efficient and resource-conscious to minimize loads on infrastructure. This stripped down approach naturally aligns well with containers and an ephemeral deployment strategy, with new containers being deployed constantly and other workloads being updated to the latest code available. This cuts down security risks by taking advantage of the newest software packages, rather than waiting for distro patches and backports.

Portability: Cloud-native applications are designed to be portable, with consistent performance and reliability regardless of where the application is running. As a result of containers standardizing the environment, developers can move beyond the age-old "it worked fine on my machine" headaches of the past.

The virtuous cycle of innovation driving new use cases and ultimately new innovations is clear when it comes to containerization and the widespread adoption of cloud-native applications. Critically, this inflection point of innovation and use case demands has driven an incredible rate of change within open source software — we've reached a point where the security, performance, and innovation drawbacks of traditional "frozen-in-time" Linux distros outweigh the familiarity and perceived stability of the last generation of software delivery.

So what should the next generation of open source software delivery look like?

Enter: Chainguard OS

To meet modern security, performance, and productivity expectations, software builders need the latest software in the smallest form designed for their use case, without any of the CVEs that lead to risk for the business (and a list of "fix-its" from the security teams). Making good on those parameters requires more than a makeover of the past. Instead, the next generation of open source software delivery needs to start from the source of secure, updated software: the upstream maintainers.

That's why Chainguard built this new distroless approach, continuously rebuilding software packages based not on downstream distros but on the upstream sources that are removing vulnerabilities and adding performance improvements. We call it Chainguard OS.

Chainguard OS serves as the foundation for the broad security, efficiency, and productivity outcomes that Chainguard products deliver today, "Chainguarding" a rapidly growing catalog of over 1,000 container images.

Chainguard OS adheres to four key principles to make that possible:

  • Continuous Integration and Delivery: Emphasizes the continuous integration, testing, and release of upstream software packages, ensuring a streamlined and efficient development pipeline through automation.
  • Nano Updates and Rebuilds: Favors non-stop incremental updates and rebuilds over major release upgrades, ensuring smoother transitions and minimizing disruptive changes.
  • Minimal, Hardened, Immutable Artifacts: Strips away unnecessary vendor bloat from software artifacts, making sidecar packages and extras optional to the user while enhancing security through hardening measures.
  • Delta Minimization: Keeps deviations from upstream to a minimum, incorporating extra patches only when essential and only for as long as necessary until a new release is cut from upstream.

Perhaps the best way to highlight the value of Chainguard OS's principles is to see the impact in Chainguard Images.

In the screenshot below, you can see a side-by-side comparison between the upstream "python:latest" image and the "cgr.dev/chainguard/python:latest" Chainguard Image.

Aside from the very clear discrepancy in vulnerability counts, it's worth examining the size difference between the two container images: the Chainguard image is just 6% of the size of the open source alternative.

Along with the minimized image size, the Chainguard image was last updated just an hour prior to the screengrab, something that happens daily.

A quick scan of the provenance and SBOM data illustrates the end-to-end integrity and immutability of the artifacts — a kind of complete nutrition label that underscores the security and transparency that a modern approach to open source software delivery can provide.

Each Chainguard image stands as a practical example of the value Chainguard OS provides, offering a stark alternative to what has come before it. Perhaps the greatest indicator is the feedback we've received from customers, who have shared how Chainguard's container images have helped eliminate CVEs, secure their supply chains, achieve and maintain compliance, and reduce developer toil, enabling them to re-allocate precious developer resources.

Our belief is that Chainguard OS's principles and approach can be applied to a variety of use cases, extending the benefits of continuously rebuilt-from-source software packages to even more of the open source ecosystem.

If you found this useful, be sure to check out our whitepaper on this subject or contact our team to talk to an expert on Chainguard's distroless approach.

Note: This article is expertly written and contributed by Dustin Kirkland — VP of Engineering at Chainguard.




from The Hacker News https://ift.tt/w0cQSsB
via IFTTT

Thursday, April 3, 2025

Microsoft Warns of Tax-Themed Email Attacks Using PDFs and QR Codes to Deliver Malware

Microsoft is warning of several phishing campaigns that are leveraging tax-related themes to deploy malware and steal credentials.

"These campaigns notably use redirection methods such as URL shorteners and QR codes contained in malicious attachments and abuse legitimate services like file-hosting services and business profile pages to avoid detection," Microsoft said in a report shared with The Hacker News.

A notable aspect of these campaigns is that they lead to phishing pages that are delivered via a phishing-as-a-service (PhaaS) platform codenamed RaccoonO365, an e-crime platform that first came to light in early December 2024.

Also delivered are remote access trojans (RATs) like Remcos RAT, as well as other malware and post-exploitation frameworks such as Latrodectus, AHKBot, GuLoader, and BruteRatel C4 (BRc4).

One such campaign spotted by the tech giant on February 6, 2025, is estimated to have sent hundreds of emails targeting the United States ahead of the tax filing season that attempted to deliver BRc4 and Latrodectus. The activity has been attributed to Storm-0249, an initial access broker previously known for distributing BazaLoader, IcedID, Bumblebee, and Emotet.

The attacks involve the use of PDF attachments containing a link that redirects users to a URL shortened via Rebrandly, ultimately leading them to a fake DocuSign page with an option to view or download the document.

"When users clicked the Download button on the landing page, the outcome depended on whether their system and IP address were allowed to access the next stage based on filtering rules set up by the threat actor," Microsoft said.

If access is allowed, the user is sent a JavaScript file that subsequently downloads a Microsoft Software Installer (MSI) for BRc4, which serves as a conduit for deploying Latrodectus. If the victim is not deemed a valuable enough target, they are sent a benign PDF document from royalegroupnyc[.]com.

Microsoft said it also detected a second campaign between February 12 and 28, 2025, where tax-themed phishing emails were sent to more than 2,300 organizations in the U.S., particularly aimed at engineering, IT, and consulting sectors.

The emails, in this case, had no content in the message body, but featured a PDF attachment containing a QR code that pointed to a link associated with the RaccoonO365 PhaaS that mimics Microsoft 365 login pages to trick users into entering their credentials.

In a sign that these campaigns come in various forms, tax-themed phishing emails have also been flagged as propagating other malware families like AHKBot and GuLoader.

AHKBot infection chains direct users to sites hosting a malicious Microsoft Excel file. Opening the file and enabling macros downloads and runs an MSI file that launches an AutoHotKey script; the script then downloads a Screenshotter module to capture screenshots from the compromised host and exfiltrate them to a remote server.

The GuLoader campaign aims to deceive users into clicking on a URL present within a PDF email attachment, resulting in the download of a ZIP file.

"The ZIP file contained various .lnk files set up to mimic tax documents. If launched by the user, the .lnk file uses PowerShell to download a PDF and a .bat file," Microsoft said. "The .bat file in turn downloaded the GuLoader executable, which then installed Remcos."

The development comes weeks after Microsoft warned of another Storm-0249 campaign that redirected users to fake websites advertising Windows 11 Pro to deliver an updated version of Latrodectus loader malware via the BruteRatel red-teaming tool.

"The threat actor likely used Facebook to drive traffic to the fake Windows 11 Pro download pages, as we observed Facebook referrer URLs in multiple cases," Microsoft said in a series of posts on X.

"Latrodectus 1.9, the malware's latest evolution first observed in February 2025, reintroduced the scheduled task for persistence and added command 23, enabling the execution of Windows commands via 'cmd.exe /c .'"

The disclosure also follows a surge in campaigns that use QR codes in phishing documents to disguise malicious URLs as part of widespread attacks aimed at Europe and the U.S., resulting in credential theft.

"Analysis of the URLs extracted from the QR codes in these campaigns reveals that attackers typically avoid including URLs that directly point to the phishing domain," Palo Alto Networks Unit 42 said in a report. "Instead, they often use URL redirection mechanisms or exploit open redirects on legitimate websites."

These findings also come in the wake of several phishing and social engineering campaigns that have been flagged in recent weeks -

  • Use of the browser-in-the-browser (BitB) technique to serve seemingly realistic browser pop-ups that trick players of Counter-Strike 2 into entering their Steam credentials with the likely goal of reselling access to these accounts for profit
  • Use of information stealer malware to hijack MailChimp accounts, permitting threat actors to send email messages in bulk
  • Use of SVG files to bypass spam filters and redirect users to fake Microsoft login pages
  • Use of trusted collaboration services like Adobe, DocuSign, Dropbox, Canva, and Zoho to sidestep secure email gateways (SEGs) and steal credentials
  • Use of emails spoofing music streaming services like Spotify and Apple Music with the goal of harvesting credentials and payment information
  • Use of fake security warnings related to suspicious activity on Windows and Apple Mac devices on bogus websites to deceive users into providing their system credentials
  • Use of fake websites distributing trojanized Windows installers for DeepSeek, i4Tools, and Youdao Dictionary Desktop Edition that drop Gh0st RAT
  • Use of billing-themed phishing emails targeting Spanish companies to distribute an information stealer named DarkCloud

To mitigate the risks posed by these attacks, it's essential that organizations adopt phishing-resistant authentication methods for users, use browsers that can block malicious websites, and enable network protection to prevent applications or users from accessing malicious domains.




from The Hacker News https://ift.tt/n8EGKuJ
via IFTTT

Suspected China-Nexus Threat Actor Actively Exploiting Critical Ivanti Connect Secure Vulnerability (CVE-2025-22457)

Written by: John Wolfram, Michael Edie, Jacob Thompson, Matt Lin, Josh Murchie


On Thursday, April 3, 2025, Ivanti disclosed a critical security vulnerability, CVE-2025-22457, impacting Ivanti Connect Secure (“ICS”) VPN appliances version 22.7R2.5 and earlier. CVE-2025-22457 is a buffer overflow vulnerability, and successful exploitation would result in remote code execution. Mandiant and Ivanti have identified evidence of active exploitation in the wild against ICS 9.X (end of life) and 22.7R2.5 and earlier versions. Ivanti and Mandiant encourage all customers to upgrade as soon as possible. 

The earliest evidence of observed CVE-2025-22457 exploitation occurred in mid-March 2025. Following successful exploitation, we observed the deployment of two newly identified malware families, the TRAILBLAZE in-memory only dropper and the BRUSHFIRE passive backdoor. Additionally, deployment of the previously reported SPAWN ecosystem of malware attributed to UNC5221 was also observed. UNC5221 is a suspected China-nexus espionage actor that we previously observed conducting zero-day exploitation of edge devices dating back to 2023.

A patch for CVE-2025-22457 was released in ICS 22.7R2.6 on February 11, 2025. The vulnerability is a buffer overflow with a limited character space, and therefore it was initially believed to be a low-risk denial-of-service vulnerability. We assess it is likely the threat actor studied the patch for the vulnerability in ICS 22.7R2.6 and uncovered, through a complicated process, that it was possible to exploit 22.7R2.5 and earlier to achieve remote code execution.

Ivanti released patches for the exploited vulnerability and Ivanti customers are urged to follow the actions in the Security Advisory to secure their systems as soon as possible.

Post-Exploitation TTPs

Following successful exploitation, Mandiant observed the deployment of two newly identified malware families tracked as TRAILBLAZE and BRUSHFIRE through a shell script dropper. Mandiant has also observed the deployment of the SPAWN ecosystem of malware, as well as a modified version of the Integrity Checker Tool (ICT) as a means of evading detection.  

Shell-script Dropper

Following successful exploitation of CVE-2025-22457, Mandiant observed a shell script being leveraged that executes the TRAILBLAZE dropper. This dropper injects the BRUSHFIRE passive backdoor into a running /home/bin/web process. The first stage begins by searching for a /home/bin/web process that is a child process of another /home/bin/web process (the point of this appears to be to inject into the web process that is actually listening for connections). It then creates the following files and associated content:

  • /tmp/.p: contains the PID of the /home/bin/web process.
  • /tmp/.m: contains a human-readable memory map of that process.
  • /tmp/.w: contains the base address of the web binary from that process.
  • /tmp/.s: contains the base address of libssl.so from that process.
  • /tmp/.r: contains the BRUSHFIRE passive backdoor.
  • /tmp/.i: contains the TRAILBLAZE dropper.

The shell script then executes /tmp/.i, which is the second stage in-memory only dropper tracked as TRAILBLAZE. It then deletes all of the temporary files previously created (except for /tmp/.p), as well as the contents of the /data/var/cores directory. Next, all child processes of the /home/bin/web process are killed and the /tmp/.p file is deleted. All of this behavior is non-persistent, and the dropper will need to be re-executed if the system or process is rebooted.
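
Defenders can turn the dropper's own process-selection logic into a hunting check: look for a /home/bin/web process whose parent is also /home/bin/web. A sketch over a pid-to-(ppid, command) table, so it can be fed from /proc or from a forensic process listing (the table below is fabricated for illustration):

```python
def nested_web_processes(procs: dict[int, tuple[int, str]],
                         path: str = "/home/bin/web") -> list[int]:
    """Return PIDs whose command is `path` and whose parent also runs `path`,
    i.e. the listening worker the dropper singles out for injection."""
    return sorted(
        pid for pid, (ppid, cmd) in procs.items()
        if cmd == path and procs.get(ppid, (0, ""))[1] == path
    )

# pid: (parent pid, command) -- a fabricated process table
table = {
    1:   (0, "/sbin/init"),
    800: (1, "/home/bin/web"),       # supervisor process
    812: (800, "/home/bin/web"),     # child worker: the injection target
    900: (1, "/home/bin/dslogserver"),
}
print(nested_web_processes(table))  # [812]
```

Since the implant is memory-resident only, live process inspection like this (rather than disk scanning) is the more relevant detection surface, alongside the core-dump monitoring noted in the recommendations.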

TRAILBLAZE

TRAILBLAZE is an in-memory only dropper written in bare C that uses raw syscalls and is designed to be as minimal as possible, likely to ensure it can fit within the shell script as Base64. TRAILBLAZE injects a hook into the identified /home/bin/web process. It will then inject the BRUSHFIRE passive backdoor into a code cave inside that process.

BRUSHFIRE

BRUSHFIRE is a passive backdoor written in bare C that acts as an SSL_read hook. It first executes the original SSL_read function, and checks to see if the returned data begins with a specific string. If the data begins with the string, it will XOR decrypt then execute shellcode contained in the data. If the received shellcode returns a value, the backdoor will call SSL_write to send the value back.

SPAWNSLOTH

As detailed in our previous blog post, SPAWNSLOTH acts as a log tampering component tied to the SPAWNSNAIL backdoor. It targets the dslogserver process to disable both local logging and remote syslog forwarding.

SPAWNSNARE

SPAWNSNARE is a utility that is written in C and targets Linux. It can be used to extract the uncompressed Linux kernel image (vmlinux) into a file and encrypt it using AES without the need for any command line tools.

SPAWNWAVE

SPAWNWAVE is an evolved version of SPAWNANT that combines capabilities from other members of the SPAWN* malware ecosystem. SPAWNWAVE overlaps with the publicly reported SPAWNCHIMERA and RESURGE malware families.

Attribution

Google Threat Intelligence Group (GTIG) attributes the exploitation of CVE-2025-22457 and the subsequent deployment of the SPAWN ecosystem of malware to the suspected China-nexus espionage actor UNC5221. GTIG has previously reported UNC5221 conducting zero-day exploitation of CVE-2025-0282, as well as the exploitation of CVE-2023-46805 and CVE-2024-21887.

Furthermore, GTIG has also previously observed UNC5221 conducting zero-day exploitation of CVE-2023-4966, impacting NetScaler ADC and NetScaler Gateway appliances. UNC5221 has targeted a wide range of countries and verticals during their operations, and has leveraged an extensive set of tooling, spanning passive backdoors to trojanized legitimate components on various edge appliances. 

GTIG assesses that UNC5221 will continue pursuing zero-day exploitation of edge devices based on their consistent history of success and aggressive operational tempo. Additionally, as noted in our prior blog post detailing CVE-2025-0282 exploitation, GTIG has observed UNC5221 leveraging an obfuscation network of compromised Cyberoam appliances, QNAP devices, and ASUS routers to mask their true source during intrusion operations.

Conclusion

This latest activity from UNC5221 underscores the ongoing sophisticated threats targeting edge devices globally. This campaign, exploiting the n-day vulnerability CVE-2025-22457, also highlights the persistent focus of actors like UNC5221 on edge devices, leveraging deep device knowledge and adding to their history of using both zero-day and now n-day flaws. This activity aligns with the broader strategy GTIG has observed among suspected China-nexus espionage groups who invest significantly in exploits and custom malware for critical edge infrastructure.

Recommendations 

Mandiant recommends organizations immediately apply the available patch by upgrading Ivanti Connect Secure (ICS) appliances to version 22.7R2.6 or later to address CVE-2025-22457. Additionally, organizations should use the external and internal Integrity Checker Tool (“ICT”) and contact Ivanti Support if suspicious activity is identified. To supplement this, defenders should actively monitor for core dumps related to the web process, investigate ICT statedump files, and conduct anomaly detection on client TLS certificates presented to the appliance.

Acknowledgements

We would like to thank Daniel Spicer and the rest of the team at Ivanti for their continued partnership and support in this investigation. Additionally, this analysis would not have been possible without the assistance of analysts across Google Threat Intelligence Group and Mandiant’s FLARE; we would like to specifically thank Christopher Gardner and Dhanesh Kizhakkinan of FLARE for their support.

Indicators of Compromise

To assist the security community in hunting and identifying activity outlined in this blog post, we have included indicators of compromise (IOCs) in a GTI Collection for registered users.

Code Family   MD5                                Filename                Description
TRAILBLAZE    4628a501088c31f53b5c9ddf6788e835   /tmp/.i                 In-memory dropper
BRUSHFIRE     e5192258c27e712c7acf80303e68980b   /tmp/.r                 Passive backdoor
SPAWNSNARE    6e01ef1367ea81994578526b3bd331d6   /bin/dsmain             Kernel extractor & encryptor
SPAWNWAVE     ce2b6a554ae46b5eb7d79ca5e7f440da   /lib/libdsupgrade.so    Implant utility
SPAWNSLOTH    10659b392e7f5b30b375b94cae4fdca0   /tmp/.liblogblock.so    Log tampering utility

YARA Rules

rule M_APT_Installer_SPAWNANT_1
{
    meta:
        author = "Mandiant"
        description = "Detects SPAWNANT. SPAWNANT is an installer targeting Ivanti devices. Its purpose is to persistently install other malware from the SPAWN family (SPAWNSNAIL, SPAWNMOLE) as well as drop additional webshells on the box."

    strings:
        $s1 = "dspkginstall" ascii fullword
        $s2 = "vsnprintf" ascii fullword
        $s3 = "bom_files" ascii fullword
        $s4 = "do-install" ascii
        $s5 = "ld.so.preload" ascii
        $s6 = "LD_PRELOAD" ascii
        $s7 = "scanner.py" ascii

    condition:
        uint32(0) == 0x464c457f and 5 of ($s*)
}
rule M_Utility_SPAWNSNARE_1
{
    meta:
        author = "Mandiant"
        description = "SPAWNSNARE is a utility written in C that targets Linux systems by extracting the uncompressed Linux kernel image into a file and encrypting it with AES."

    strings:
        $s1 = "\x00extract_vmlinux\x00"
        $s2 = "\x00encrypt_file\x00"
        $s3 = "\x00decrypt_file\x00"
        $s4 = "\x00lbb_main\x00"
        $s5 = "\x00busybox\x00"
        $s6 = "\x00/etc/busybox.conf\x00"

    condition:
        uint32(0) == 0x464c457f and all of them
}
rule M_APT_Utility_SPAWNSLOTH_2
{ 
    meta: 
        author = "Mandiant" 
        description = "Hunting rule to identify strings found in SPAWNSLOTH"
  
    strings: 
        $dslog = "dslogserver" ascii fullword
        $hook1 = "g_do_syslog_servers_exist" ascii fullword
        $hook2 = "ZN5DSLog4File3addEPKci" ascii fullword
        $hook3 = "funchook" ascii fullword
    
    condition: 
        uint32(0) == 0x464c457f and all of them
}



VMware vSphere vSwitch Load Balancing Options: A Complete Guide

Disclaimer: 

This article has been updated with the most recent information relevant to VMware vSphere, including features and functionality as of 2025. It provides an overview of ESXi vSphere vSwitch load balancing options, highlighting their pros and cons. While the content is based on the latest version of vSphere at the time of publication, we recommend consulting VMware’s official documentation or release notes for any updates or changes. The material is intended for informational purposes and serves as an introductory guide. If you have suggestions for improving this guide, feel free to share your feedback! 

Introduction 

Previously, I discussed addressing NIC load balancing issues on an ESXi host and the utility of ESXCLI in that context. Since then, many colleagues have inquired about the differences between various load balancing methods and which one is optimal. Let’s explore and clarify the concepts of network load balancing at the infrastructure level. 

For starters, let’s quickly revisit what load balancing is all about. Here’s the deal: don’t mix up load balancing network traffic with balancing workloads for optimal performance; that’s the job of DRS or Distributed Resource Scheduler.  

NIC teaming technology in VMware combines two or more physical NICs into a single logical interface to increase the bandwidth of a vSphere virtual switch or a group of ports, thereby enhancing reliability. By configuring the failover procedure, you can choose how exactly traffic will be redirected in case of a failure of one of the NICs. Configuring the load balancing policy allows you to decide how exactly a vSwitch will load balance the traffic between NICs. 

So, what’s the takeaway? Load balancing is essentially the technology of uniting physical interfaces into one seamless logical connection. Although aggregation allows increasing channel bandwidth, you shouldn’t really count on perfect load balancing between all interfaces in the aggregated channel. Put simply, this tech is about smartly directing traffic from virtual machines (VMs) to vSwitches and down to pNICs. Whether it’s a vSwitch, pNIC, or a group of vNICs, there are a few tried-and-true methods to balance traffic:  

  • Route based on originating port ID 
  • Route based on IP hash 
  • Route based on source MAC hash 
  • Route based on physical NIC load 
  • Use explicit failover order 

Curious? Let’s dig deeper into each method and break them down in simple terms. 

Route Based on Originating Virtual Port ID 

This method is the default option for both standard and distributed vSwitches. It assigns an uplink based on the virtual port ID of the VM’s vNIC. Each VM is connected to a specific virtual port on the vSwitch, and the vSwitch maps this virtual port to a specific physical NIC (pNIC). This method ensures that one vNIC uses only one pNIC at any given time – simple and straightforward. 

Here’s how it works: each VM gets a unique virtual port identifier on the vSwitch. To assign an uplink for a VM, the vSwitch maps that port identifier to one of the physical NICs in the team. Once an uplink port is assigned, the vSwitch sends all of that VM’s traffic through the same uplink for as long as the VM remains on that switch. 
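VMware does not publish the exact port-to-uplink mapping, but the pinning behavior described above is consistent with a simple round-robin spread by port ID. The sketch below is a hypothetical illustration, not ESXi internals:

```python
# Hypothetical sketch: spread virtual ports across uplinks round-robin,
# then pin each port to its uplink for the life of the VM on that switch.

def assign_uplink(port_id, num_pnics):
    """Map a virtual port ID to a 0-based pNIC index once, at power-on."""
    return port_id % num_pnics

# Each vNIC sticks to a single pNIC; VMs are spread across the team:
for port in range(4):
    print(f"port {port} -> pNIC {assign_uplink(port, 2) + 1}")
```

Note how a single VM never spans two uplinks here: the mapping is fixed until the VM is powered off or moved, matching the behavior described in the next paragraphs.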

The virtual switch assigns the uplink port only once, and the VM’s port identifier remains fixed; if the vSwitch assigns the VM to a different port group, it generates a new uplink port assignment. 

However, things don’t always stay static — a VM can be migrated, powered off, or deleted. When that happens, its port identifier on the vSwitch becomes available again, and the vSwitch stops sending traffic to that port, reducing the overall traffic on the uplink port it was mapped to. If the VM is powered on again or migrated, it may land on a different virtual port and start using a different uplink port. 

If all pNICs in the group are active, they distribute traffic for a VM. 

Now, let’s add a practical touch. Turn off VM 2 and VM 5 and then power on in the following order: VM 8, VM 9, VM 2, and VM 5. Guess what happens? You’ll see that the port identifier on Port Group 1 and Port Group 2 didn’t lose connection with pNIC uplink ports. In turn, VM 8 and VM 9 were connected to the uplink ports previously used by VM 2 and VM 5. It’s like musical chairs but for VMs and uplink ports! 

Pros: 

  • Simple physical switch configuration: no need for uplink binding (EtherChannel); only independent ports of the switch require configuration, keeping things simple and manageable. 
  • Equal distribution of bandwidth: when the number of vNICs exceeds the number of pNICs, this method ensures that each vNIC gets its fair share of bandwidth. 
  • Physical NIC redundancy: even if all pNICs are in active use, when one pNIC fails, the other pNICs in the team continue to balance traffic, ensuring your network stays up and running.  
  • Traffic balancing across multiple switches: physical NIC group traffic can be distributed between several physical switches, avoiding hardware failure and improving overall reliability. 
  • Beacon probing for failover detection: this load balancing type may use a network failover detection mechanism called beacon probing, enhancing the stability of your network environment.  
  • Load balancing in multi-VM environments: in environments with several VMs, the load is distributed across all active network cards, increasing overall performance. 

Cons: 

  • Limited Bandwidth per vNIC: A single vNIC cannot use the combined bandwidth of multiple pNICs. For example, if there are four pNICs in a group (1 Gb/s each), a VM with one vNIC can only utilize 1 Gb/s bandwidth through one pNIC. 
  • Not suitable for high client request volumes: this method isn’t ideal for virtual servers that handle many requests from different clients, because traffic from a single VM (with one vNIC) cannot be balanced across several pNICs. 
  • No support for 802.3ad aggregation: this method doesn’t support 802.3ad channel aggregation technology and may cause issues with accessing IP storage (e.g., iSCSI, NFS) since VMkernel can also use only one pNIC to work with different iSCSI targets. 

Route Based on IP Hash 

This load balancing method distributes traffic by computing a hash (a fixed-size value) from the source and destination IP addresses of each packet. This hashing mechanism allows traffic between a single VM and multiple clients, including clients behind a router, to be balanced across different pNICs. To enable this functionality, you’ll need to activate 802.3ad support on the physical switch connected to your ESXi server. 

Among load balancing algorithms, IP hash is a star performer when it comes to efficiency. However, with great power comes complexity. The server shoulders a significant computational load since it calculates the hash for every IP packet. The hash calculation relies on the XOR algorithm and uses this formula: 

(LSB(SrcIP) xor LSB(DestIP)) mod (# of pNICs) 
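A minimal Python sketch of this formula (the IP addresses and the two-pNIC team are illustrative, not taken from any real configuration):

```python
# Illustrative sketch of the IP-hash policy: XOR the least significant
# bytes of the source and destination IPs, then take the result modulo
# the number of pNICs to pick an uplink. Not VMware's actual code.

def ip_hash_uplink(src_ip, dst_ip, num_pnics):
    """Return the 0-based index of the pNIC chosen for this flow."""
    src_lsb = int(src_ip.rsplit(".", 1)[1])  # least significant octet
    dst_lsb = int(dst_ip.rsplit(".", 1)[1])
    return (src_lsb ^ dst_lsb) % num_pnics

# Two destinations whose addresses hash to the same modulo value land on
# the same pNIC, defeating the balancing:
print(ip_hash_uplink("10.0.0.10", "10.0.0.20", 2))  # (10 xor 20) % 2 -> 0
print(ip_hash_uplink("10.0.0.10", "10.0.0.30", 2))  # (10 xor 30) % 2 -> 0
```

With only a handful of destination addresses, collisions like this are common — which is exactly the limitation discussed later in this section.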

How evenly traffic is balanced largely depends on the number of TCP/IP sessions between the host and different clients, as well as the number of pNICs. When many connections are in play, this method delivers more even traffic distribution and avoids the pitfalls of the port-ID-based option. 

However, there’s a catch: if your host connects to multiple physical switches, you’ll need to aggregate all ports into a stack (EtherChannel). Without support for this mode on your physical switches, IP hash won’t be an option. In such situations, you might find yourself connecting all pNICs in the vSwitch to one physical switch. 

Here’s where you need to tread carefully. Relying on a single switch means introducing a single point of failure – if the switch goes down, the entire system follows suit. Think of it in advance. 

Another critical detail: when using IP hash as the load balancing algorithm, configure it at the vSwitch level and do not override it at the port group level. In other words, ALL port groups connected to a vSwitch with IP hash load balancing must use IP hash load balancing. 

IP hash works best when there is a significant number of destination IP addresses in play. Otherwise, you’re risking encountering a situation when two or more requests instead of balancing will try to load the same pNIC. 

For instance, consider a scenario where a VM uses an iSCSI-connected disk from two SANs. If those two SANs have IP addresses that hash to the same modulo value (see the table below), then all traffic will load a single pNIC, which reduces the efficiency of IP hash load balancing to a minimum. 

VM    SrcIP     DestIP    XOR (SrcIP, DestIP)   Modulo         pNIC
VM 1  x.x.x.10  z.z.z.20  (10 xor 20) = 30      30 mod 2 = 0   pNIC 1
VM 1  x.x.x.10  z.z.z.30  (10 xor 30) = 20      20 mod 2 = 0   pNIC 1

This approach works well when there’s a large number of destination IP addresses, but be mindful of the limitations when the distribution of IPs is not as diverse. 

Pros: 

  • Improved performance for multi-VM communication: when a VM communicates with multiple other VMs, it can theoretically utilize a bandwidth greater than what a single pNIC supports. 
  • Physical NIC redundancy: if a pNIC or uplink fails, the remaining NICs in the group will continue balancing traffic, ensuring uninterrupted network performance. However, synchronization is key: both the ESXi host and the physical switch must recognize the channel as inactive for failover to work properly. If there is any inconsistency, traffic won’t be able to switch to the other pNICs in the group. 

Cons: 

  • Less flexible switch configuration: physical switch configuration demands that ports be set up for EtherChannel static connections, which limits adaptability. Additionally, many switches don’t support EtherChannel across multiple physical switches, confining the pNIC group to a single switch.

Note: exceptions exist, such as specific stacks or modular switches which can actually do that on several switches or modules. Technologies like Cisco vPC (Virtual Port Channel) can address this issue, provided the switches support it. Talk to your vendor to get more information. 

  • Lacks beacon probing: this load balancing option lacks beacon probing for error detection. Instead, it relies solely on uplink port failure notifications, which may not provide as comprehensive a failover mechanism. 

Route Based on Source MAC Hash 

Now, let’s talk about a simpler yet equally intriguing load balancing method: Route Based on Source MAC Hash. Here, the vSwitch selects an uplink port for a VM based on the VM’s MAC address: it takes the least significant byte (LSB) of the source MAC address (the vNIC’s MAC) modulo the number of active pNICs in the vSwitch, and uses the result as an index into the pNIC array. 

Let’s break it down with an example: consider a setup with three pNICs and a vNIC with the MAC address 00:15:5D:99:96:0B. The LSB of the MAC address is 0x0B, or 11 in decimal. 11 mod 3 leaves a remainder of 2, and since the pNIC array is zero-based (0 = pNIC 1, 1 = pNIC 2, 2 = pNIC 3), this VM is pinned to pNIC 3.  

Name  MAC LSB     Modulo          pNIC
VM 1  :39 = 57    57 mod 3 = 0    pNIC 1
VM 2  :6D = 109   109 mod 3 = 1   pNIC 2
VM 3  :0E = 14    14 mod 3 = 2    pNIC 3
VM 4  :5A = 90    90 mod 3 = 0    pNIC 1
VM 5  :97 = 151   151 mod 3 = 1   pNIC 2
VM 6  :F5 = 245   245 mod 3 = 2   pNIC 3
VM 7  :A2 = 162   162 mod 3 = 0   pNIC 1
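The calculation above is easy to reproduce in a few lines of Python (a sketch of the formula, not ESXi code; the MAC prefix is illustrative):

```python
# Illustrative sketch of the source-MAC-hash policy: the last byte of the
# vNIC's MAC address, modulo the number of active pNICs, selects the uplink.

def mac_hash_uplink(mac, num_pnics):
    """Return the 0-based pNIC index for a given source MAC address."""
    lsb = int(mac.split(":")[-1], 16)  # least significant byte, hex -> int
    return lsb % num_pnics

# Reproducing two rows of the table with three pNICs (0 = pNIC 1, etc.):
print(mac_hash_uplink("00:15:5D:99:96:39", 3))  # 57 mod 3 -> 0 (pNIC 1)
print(mac_hash_uplink("00:15:5D:99:96:0B", 3))  # 11 mod 3 -> 2 (pNIC 3)
```

Because the MAC address is static, the result never changes across power cycles — which is exactly the "consistent uplink port assignment" listed in the pros below.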

Pros: 

  • More balanced load distribution: compared to the “Route Based on Originating Port ID” method, this approach ensures a more equitable load balancing as the vSwitch calculates an uplink port for each packet, ensuring better traffic distribution. 
  • Consistent uplink port assignment: all VMs use the same uplink port because their MAC addresses are static, meaning that powering a VM on or off doesn’t disrupt its uplink port assignment. 
  • No physical switch changes needed: this method eliminates the need for any configuration adjustments on physical switches, simplifying deployment and reducing setup time. 

Cons: 

  • Bandwidth limited by uplink port speed: the speed of the uplink port connected to a specific port identifier determines the bandwidth available to the VM unless the VM utilizes multiple vNICs with different MAC addresses. 
  • Higher resource consumption: this method is more resource-intensive than the routing based on originating port ID, as the vSwitch must calculate the uplink port for each packet. 
  • Potential uplink port overload: the virtual switch does not monitor the current load of uplink ports, increasing the risk of some ports becoming overloaded while others remain underutilized. 

Route Based on Physical NIC Load 

This load balancing method is exclusive to distributed switches, and while it may seem similar to the routing based on originating port ID, it brings some notable differences to the table. The primary distinction lies in how the pNIC for traffic balancing is selected. Instead of a static assignment, the choice is dynamically determined based on the current load on the pNIC. 

The system evaluates the load on each pNIC every 30 seconds. If the load on a specific pNIC exceeds 75%, the VM port identifier with the highest I/O operations switches to another uplink port of a less-loaded pNIC. Unlike other load-balancing methods where the port remains fixed once assigned, this approach adapts to changing traffic conditions. 

In simpler terms, this method isn’t traditional load balancing. It’s more like a smart failover scenario, redirecting traffic to the least busy uplink port from the list of active pNICs whenever necessary. 
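The 30-second/75% logic described above can be sketched as follows. The function name, uplink names, and data structure are assumptions for illustration; ESXi’s internal implementation is not public:

```python
# Simplified sketch of load-based teaming: every 30 s the switch samples
# each uplink's utilization; if one exceeds 75%, the busiest port is moved
# to the least-loaded uplink. Names and structure here are illustrative.

def rebalance(loads, threshold=0.75):
    """Given {uplink_name: utilization 0..1}, return the pair
    (overloaded_uplink, target_uplink) if a move is needed, else None."""
    busiest = max(loads, key=loads.get)
    if loads[busiest] <= threshold:
        return None  # nothing exceeds the 75% threshold; keep mappings
    idlest = min(loads, key=loads.get)
    return busiest, idlest

print(rebalance({"vmnic0": 0.82, "vmnic1": 0.30}))  # ('vmnic0', 'vmnic1')
print(rebalance({"vmnic0": 0.50, "vmnic1": 0.30}))  # None
```

The key contrast with the other policies is visible here: the mapping is re-evaluated periodically instead of being fixed once at power-on.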

Pros: 

  • Low resource consumption: the distributed switch calculates the uplink port for the VM only once, and periodic uplink checks minimally impact performance; 
  • Efficient load redistribution: the distributed switch actively monitors the uplink port load and shifts traffic to maintain balance where possible. 
  • No physical switch configuration required: this method works seamlessly without needing adjustments on the physical network side. 

Cons: 

  • Bandwidth constraints: the available bandwidth for a VM is determined solely by the uplink port connected to the distributed switch. 

Use Explicit Failover Order 

This policy takes a more straightforward approach, although it might come as a surprise to some – it essentially eliminates true load balancing. Here’s how it works: the vSwitch always selects the highest-priority uplink port from the list of available active NICs. If the first uplink port becomes unavailable, traffic shifts to the next one in the list, and so on. 

The failover order parameter is key here, defining the Active/Standby pNIC mode for the vSwitch. While simple, this method sacrifices the flexibility and efficiency of dynamic load balancing in favor of a more rigid, predictable behavior. 

Comparison of Load Balancing Policies

Method  Pros  Cons 
Route Based on Originating Virtual Port ID  Simplicity, Even Distribution, Redundancy, Multiple Switches  Bandwidth Limitation, Not Ideal for High Traffic VMs, No 802.3ad Support 
Route Based on IP Hash  Enhanced Performance, Redundancy  Complex Configuration, No Beacon Probing, Potential Imbalance 
Route Based on Source MAC Hash  Improved Distribution, Consistent Assignment, No Physical Switch Configuration Needed  Bandwidth Limitation, Resource Intensive 
Route Based on Physical NIC Load  Dynamic Load Distribution, Automatic Adjustment  vSphere Distributed Switch Requirement, Potential for Frequent Reassignments 
Use Explicit Failover Order  Predictability, Simplicity  No Load Balancing, Manual Configuration

Conclusion 

Each load balancing policy comes with its own set of advantages and drawbacks, and the best choice depends entirely on your specific needs. If you’re new to this topic, starting with the originating port ID method (the default option) is a good idea – it’s simple, effective, and a great introduction to how load balancing works. 

As your understanding grows, you can experiment with other methods to find the one that aligns best with your workload and infrastructure requirements. 

I hope this explanation helps clarify these load balancing methods for you. If you’re eager to dive deeper, VMware’s official guides are an excellent next step. And if you have suggestions or ideas for improving this material, feel free to share them – I’m all ears! 

 


