Friday, April 19, 2024

Hackers Target Middle East Governments with Evasive "CR4T" Backdoor

Apr 19, 2024 | Newsroom | Cyber Espionage / Threat Intelligence

Government entities in the Middle East have been targeted as part of a previously undocumented campaign to deliver a new backdoor dubbed CR4T.

Russian cybersecurity company Kaspersky said it discovered the activity in February 2024, with evidence suggesting that it may have been active since at least a year prior. The campaign has been codenamed DuneQuixote.

"The group behind the campaign took steps to prevent collection and analysis of its implants and implemented practical and well-designed evasion methods both in network communications and in the malware code," Kaspersky said.

The starting point of the attack is a dropper, which comes in two variants: a regular dropper implemented as either an executable or a DLL file, and a tampered installer file for a legitimate tool named Total Commander.

Regardless of the method used, the primary function of the dropper is to extract an embedded command-and-control (C2) address that's decrypted using a novel technique to prevent the server address from being exposed to automated malware analysis tools.

Specifically, it entails obtaining the filename of the dropper and stringing it together with one of the many hard-coded snippets from Spanish poems present in the dropper code. The malware then calculates the MD5 hash of the combined string, which acts as the key to decode the C2 server address.
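To make the scheme concrete, here is a minimal Python sketch of the key-derivation step as described. Treat it as an illustration only: the filename and poem snippet are hypothetical, and the repeating-key XOR merely stands in for the dropper's actual decryption routine, which isn't named here.

import hashlib

# Hypothetical reconstruction for illustration. The inputs are placeholders,
# and XOR stands in for whatever cipher the real dropper pairs with the
# MD5-derived key.
def derive_key(dropper_filename: str, poem_snippet: str) -> bytes:
    combined = dropper_filename + poem_snippet
    return hashlib.md5(combined.encode("utf-8")).digest()  # 16-byte key

def decode_c2(encrypted_c2: bytes, key: bytes) -> str:
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(encrypted_c2)).decode("ascii", "replace")

key = derive_key("dropper.exe", "En un lugar de la Mancha...")  # both values hypothetical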

The dropper subsequently establishes connections with the C2 server and downloads a next-stage payload after providing a hard-coded ID as the User-Agent string in the HTTP request.

"The payload remains inaccessible for download unless the correct user agent is provided," Kaspersky said. "Furthermore, it appears that the payload may only be downloaded once per victim or is only available for a brief period following the release of a malware sample into the wild."

The trojanized Total Commander installer, on the other hand, carries a few differences despite retaining the main functionality of the original dropper.

It does away with the Spanish poem strings and implements additional anti-analysis checks that prevent a connection to the C2 server if the system has a debugger or a monitoring tool installed, if the position of the cursor does not change within a given period, if the available RAM is less than 8 GB, or if the disk capacity is less than 40 GB.
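To see what such gates look like in practice, below is a minimal, Windows-only Python sketch of comparable checks built on documented Win32 APIs via ctypes. The 8 GB and 40 GB thresholds mirror the ones reported above; the cursor wait interval is arbitrary, and the debugger/monitoring-tool check is reduced to a simple IsDebuggerPresent call.

import ctypes
import ctypes.wintypes
import shutil
import time

class MEMORYSTATUSEX(ctypes.Structure):
    _fields_ = [
        ("dwLength", ctypes.c_uint32), ("dwMemoryLoad", ctypes.c_uint32),
        ("ullTotalPhys", ctypes.c_uint64), ("ullAvailPhys", ctypes.c_uint64),
        ("ullTotalPageFile", ctypes.c_uint64), ("ullAvailPageFile", ctypes.c_uint64),
        ("ullTotalVirtual", ctypes.c_uint64), ("ullAvailVirtual", ctypes.c_uint64),
        ("ullAvailExtendedVirtual", ctypes.c_uint64),
    ]

def environment_looks_analyzed(cursor_wait_s: int = 5) -> bool:
    # Debugger attached to the current process?
    if ctypes.windll.kernel32.IsDebuggerPresent():
        return True
    # Less than 8 GB of physical RAM?
    mem = MEMORYSTATUSEX()
    mem.dwLength = ctypes.sizeof(mem)
    ctypes.windll.kernel32.GlobalMemoryStatusEx(ctypes.byref(mem))
    if mem.ullTotalPhys < 8 * 2**30:
        return True
    # Less than 40 GB of disk capacity?
    if shutil.disk_usage("C:\\").total < 40 * 2**30:
        return True
    # A cursor that never moves during the wait window suggests automation.
    before = ctypes.wintypes.POINT()
    ctypes.windll.user32.GetCursorPos(ctypes.byref(before))
    time.sleep(cursor_wait_s)
    after = ctypes.wintypes.POINT()
    ctypes.windll.user32.GetCursorPos(ctypes.byref(after))
    return (before.x, before.y) == (after.x, after.y)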

CR4T ("CR4T.pdb") is a C/C++-based memory-only implant that grants attackers access to a console for command line execution on the infected machine, performs file operations, and uploads and downloads files after contacting the C2 server.

Kaspersky said it also unearthed a Golang version of CR4T with identical features that additionally possesses the ability to execute arbitrary commands and create scheduled tasks using the Go-ole library.

On top of that, the Golang CR4T backdoor is equipped to achieve persistence by utilizing the COM object hijacking technique and to leverage the Telegram API for C2 communications.

The presence of the Golang variant is an indication that the unidentified threat actors behind DuneQuixote are actively refining their tradecraft with cross-platform malware.

"The 'DuneQuixote' campaign targets entities in the Middle East with an interesting array of tools designed for stealth and persistence," Kaspersky said.

"Through the deployment of memory-only implants and droppers masquerading as legitimate software, mimicking the Total Commander installer, the attackers demonstrate above average evasion capabilities and techniques."




from The Hacker News https://ift.tt/zsJMqhT
via IFTTT

Transatlantic Cable podcast episode 343 | Kaspersky official blog

Episode 343 of the Transatlantic Cable podcast begins with news that Instagram is testing a tool to help tackle ‘sextortion’, or intimate image abuse. Following that, the team discuss how criminals are increasingly using AI to defraud consumers out of their money.

The last two stories look at X and ransomware. The first focuses on how X is automatically removing “twitter” from URLs, providing scammers with a real opportunity. Finally, the last story looks at how some ransomware gangs are trying their luck at calling the front desks of businesses to leverage payment out of them; however, it doesn’t always go to plan.

If you like what you heard, please consider subscribing.



from Kaspersky official blog https://ift.tt/Gn2e0XF
via IFTTT

Thursday, April 18, 2024

ATT&CK 2024 Roadmap

Enhancing usability, expanding scope, optimizing defenses

2023 was a dynamic year for ATT&CK. We marked a decade of progress since the framework’s inception and achieved some key milestones to make ATT&CK more accessible for a wider community. Our scope (slightly) expanded to encompass activities adjacent to direct Enterprise interactions, such as non-technical, deceptive practices and social engineering techniques (Financial Theft, Impersonation, and Spearphishing Voice). We enhanced detection capabilities with integrated notes, pseudocode from CAR, and BZAR-based analytics. The ICS matrix welcomed the addition of Assets to enhance inter-sector communication and mapping. We rolled out Mobile-specific data sources, structured detections, and behaviors like smishing, quishing, and vishing. Website navigation was improved, along with a faster Search bar, and updates that hit you faster than you can say “resources/changelog.html”. We also maintained a steady cadence of updates and new content from the ATT&CK team and external contributors.

In October, we successfully held ATT&CKcon 4.0, with new insights shared and realistic applications demonstrated by practitioners. And finally, we kickstarted the ATT&CK Benefactor program.


2024 Roadmap: Vision & Goals

Since launching ATT&CK, we’ve been humbled to witness how the community has integrated it across widely varied spheres and around the globe. The vision for ATT&CK has always been to enable the broadest use across the widest spectrum of stakeholders — whether you’re cross-mapping between domains, annotating and developing tailored Navigator layers, or using the framework as a blueprint to build multi-platform threat models. ATT&CK was designed to empower defenders precisely where they need it most. This is the core thesis for ATT&CK, and as its stewards, we’ll continue prioritizing measures that advance a more inclusive, relevant, and actionable framework.

In line with this vision, our 2024 goals are to bolster broader usability and enhance actionable defensive measures for practitioners across every domain. This includes exploring scope adjustments and platform rebalancing, as well as implementing structural modifications with the introduction of ICS sub-techniques. A core focus will be reinforcing defensive mechanisms and optimizing their user-friendliness. We’ll be bridging Linux and macOS information gaps and enhancing prominent adversary representation. The ATT&CK Navigator, Workbench, and website will feature reengineering to improve accessibility and enable swifter ATT&CK Group/Software/Campaign updates. We’ll also be sunsetting the TAXII 2.0 server by December 18 in favor of the upgraded TAXII 2.1 version. Finally, we’ll continue amplifying the key driver behind ATT&CK — community collaboration. This includes hosting ATT&CKcon 5.0 in October, and maintaining support for the European Union (EU) and Asia-Pacific (APAC) ATT&CK Community Workshops.

Enterprise | Integrated Defense

In tune with ATT&CK’s vision, we’re continuously re-evaluating Enterprise’s scope to more accurately reflect the threats faced by real defenders. Matrices and platforms are conceptual schematics, not real-world structures, and we’re assessing realignments, expansions, and refinements of platforms to represent interconnected organizations, the adversaries they encounter, and the reality of defenders. Our goal is to advance a cohesive and integrated framework that provides more functional use cases and empowers users to visualize and create adaptable defenses against cross-platform threats.

Cloud | Matrix Balance & More Actionability

Our Cloud goal this year is to enable defenders (both new and seasoned) to better leverage the Cloud matrix for defensive action. This includes focusing on emerging and significant threats to the domain, upgrading Cloud analytics, and optimizing the balance between generalization and detail in the matrix.

With a considerable portion of cloud identities retaining super admin access, and the frequency of identity-related intrusions across the domain, we’ve been reinforcing and creating more detailed techniques for identity-based attacks. We’ll also be diving into the exploitation of Continuous Integration/Continuous Deployment (CI/CD) pipelines and the malicious use of Infrastructure as Code (IaC). Our Cloud analytics effort will elevate your actionability, by outlining the steps to detect specific behaviors, and providing additional context on what to find and collect.

We’ll also be evaluating how to best refine the balance between abstraction and specificity in the matrix. Our exploration will assess if the platforms are broad enough to cover a wide range of cloud environments and threats, yet specific enough to inform defensive actions. This balance is crucial for the matrix to remain practical and useful for defenders operating in diverse cloud environments. Our aim is to make navigating the Cloud matrix more intuitive and enable users to prioritize techniques relevant to their specific platform.

Ready to navigate the Cloud with us? Sail over to #cloud_attack.

macOS/Linux | Countermeasures for Priv Esc and Defense Evasion

Our goal for Linux and macOS is to equip practitioners with more robust countermeasures and help bridge the information gap on defending these systems. We’ll continue tracking down in-the-wild adversary behaviors and building more macOS and Linux-only (sub)techniques to optimize defensive arsenals. For Linux we’ll be exploring privilege escalation and defense evasion to better align with in-the-wild adversary activity. On the macOS side, we’ll be strategically bolstering the platform, with a particular emphasis on threats associated with elevated permissions.

If you have intelligence or technique ideas, we would love to collaborate. We rely on the practitioners who work with these systems day in and day out to help us identify gaps and provide invaluable insights. Ready to contribute? Email us and join our #linux_attack or #macos_attack Slack channels.

Defensive Coverage | Upgrading, Converting & Restructuring Defensive Measures

Our Defensive goal this year is to expand detections and mitigations to help you better optimize your detection engineering — and maybe get a little more actionable. The April release will include both new and updated mitigations that incorporate best practices from contributors, and industry standards meticulously mapped by our defense team.

Over the past few months, we’ve also been examining analytic language approaches. Our aim? Transforming detection logic into formats compatible with different security tools and more consistent with real-world query languages such as Splunk’s. This will simplify the process of aligning your SIEM data with ATT&CK detections, making it easier to understand. We’re also incorporating data collection sources for a given detection query, for example, pulling information from Windows Event Logs or Sysmon along with the associated Event Code. The new analytic style in ATT&CK will overhaul the previously used CAR-like pseudocode and will be the model for future analytics. This will enhance compatibility across various environments and help you hunt threats more efficiently.

Lately, we’ve been prioritizing improving detections under the Execution tactic, where some of the most employed techniques fall. v15 will showcase a subset of these enhanced detections, featuring the trifecta of CAR (Cyber Analytics Repository) pseudocode, BZAR-based analytics (Bro/Zeek ATT&CK-based Analytics and Reporting) and detection notes.

Gearing up for October, we’ll be completing the enhanced detections for Execution, sculpting out Credential Access detections, exploring the universe of Cloud analytics, and navigating how to restructure our data sources for improved accessibility. This means sprucing up data source definitions and matching them to everyday use cases like sensor mappings. This way, you can more easily identify the tools and events that clue you in on shady activity. Additionally, you can opt for the data sources that best align with your specific needs. The revamp will also include the introduction of STIX IDs for data components, making it more intuitive to reference and integrate data sources.

Join our ranks at #defensive_attack channel.

ICS | Subs, Asset Expansion, & Cross-Domain Integration

ICS is leveling up this year. Our goals include broadening ICS horizons with new asset coverage, exploring platform scope expansion, and continuing our multi-domain integration quest. We’ll also be diving deeper into adversary behaviors with the introduction of sub-techniques. v15 will showcase some of these integration efforts, with the release of cross-mapped campaigns. These campaigns track IT to OT attack sequences, helping defenders better understand multi-domain intrusions and informing unified defense strategies across technology environments.

The October release will feature a structural shake-up, with the first tranche of the long-awaited sub-techniques. Like Enterprise and Mobile sub-techniques, ICS subs will break down techniques into more detail. This increased granularity allows defenders to understand the nuances of adversaries’ execution of a given technique, enhancing their ability to detect and mitigate them. The technique restructuring will involve modifying the name and scope of techniques and integrating them more effectively with other domains. This integration will foster a more comprehensive defensive approach on both the right and left of launch. You can expect a subs crosswalk to help you understand our decisions and how things map between deprecated and new techniques.

October will also include some additional treats with Asset coverage expansion, building upon the Asset refactoring in v14. The refactoring strived to provide a clearer picture of the devices, systems, or platforms a specific technique could target and introduced the concept of Related Assets. Related Assets links cross-sector Assets that share similar functions, capabilities, and architectural locations/properties, highlighting that they may also be susceptible to the same techniques. v16 will feature additional Related Assets, as well as more in-depth definitions and refined mappings of technique relationships for different devices and systems. You can start leveraging Assets for your defensive activities by viewing the technique mappings from Asset pages, or by reviewing Asset mappings from a technique page. We’ll also be scouting how to incorporate additional sectors such as maritime, rail, and electric.

We welcome input from all sectors on how to improve identification of key assets and any additional adversary behaviors you have observed in the wild. Reach out to us at attack@mitre.org or #ics_attack.

Mobile | Detections & Mitigations Optimization + PRE Exploration

Mobile’s goal is to dial up the pre-and-post-compromise defensive measures this year, with a detections and mitigations upgrade and an exploratory mission into pre-intrusion behaviors for the matrix. We introduced Mobile structured detections in v14 and will continue building out structured detections as well as expanding our mitigations across the matrix. For optimal actionability, we’ll be leveraging the best practices and tangible experiences from the mobile security community.

In the coming months we’ll also be evaluating how to enhance inter-domain connectivity across platforms and exploring integrating proactive tactics into the Mobile matrix. Our goal is to better reflect evolving adversary activity targeting the domain. This research quest will examine adversary actions before attacks, like active and passive Reconnaissance, and acquiring or developing resources for targeting purposes.

Collaboration and knowledge-sharing with the community will continue to be a driver for Mobile’s development in 2024. In addition to ramping up detections and mitigations, we’re particularly interested in partnering with mobile defenders to examine potential areas where communications platforms or domains could be added into ATT&CK. If you’re interested, connect via attack@mitre.org or join #mobile_attack.

Software Development | Enhanced Usability & Streamlined Workflows

Our Software goals this year are to increase usability across ATT&CK Workbench and Navigator, and streamline Groups and Software releases. Adversaries evolve quickly, so we’re optimizing Workbench workflows to harmonize Group and Software releases more closely to their cadence. This includes developing enhanced search capabilities, improving ATT&CK object-collection association, and overhauling the Collection Manager UI for the ATT&CK Workbench. These renovations will fine-tune the approval of ATT&CK object changes and the matching of collection bundle differences with official ATT&CK changelog types, resulting in swifter releases.

For ATT&CK Navigator, we’re refining the user experience, and the experience of anyone reading your reports. We’ll be upgrading SVG export function for sleeker output designs, providing smoother navigation with intuitive export controls, and rolling out an in-website tutorial for mastery of all the key features. We’ll also be updating the official content source to the STIX 2.1 repository — making everything a little more robust and flexible.

Finally, we’re taking our TAXII server to the next level! We’ll be sunsetting the TAXII 2.0 server by December 18, as we transition to the upgraded TAXII 2.1 version. You can access the documentation for TAXII 2.1 server in our GitHub repository. Remember to switch URLs for TAXII 2.1 clients to connect to https://attack-taxii.mitre.org instead of https://cti-taxii.mitre.org. And get ready to experience enhanced features and smoother operations.
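For client maintainers, the switch mostly amounts to pointing at the new discovery endpoint. A minimal sketch using the open-source taxii2-client Python library, assuming the standard /taxii2/ discovery path on the new host:

from taxii2client.v21 import Server  # pip install taxii2-client

# The /taxii2/ discovery path is an assumption; confirm the exact endpoint
# against the TAXII 2.1 documentation in the GitHub repository noted above.
server = Server("https://attack-taxii.mitre.org/taxii2/")
api_root = server.api_roots[0]
for collection in api_root.collections:
    print(collection.id, collection.title)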

Cyber Threat Intelligence | More Cybercriminal, Underrepresented Groups

With CTI, our mission is to better reflect the reality of the threat landscape by infusing more cybercriminal and underreported adversary activity into the framework. By bridging gaps in representation and minimizing those unknowns, we aim to provide defenders with better insights and tools to counter a wider array of threats. A pivotal aspect of this effort includes gap assessments of Groups, Software, and Campaigns. These evaluations will help us pinpoint any disparities between the current content and the reality of adversary activities.

Our releases this year will feature more cybercriminal operations and under-monitored regions, including Latin America, offering a more nuanced understanding of global threats. We’re also collaborating with ATT&CK domain leads to expand coverage of cross-domain intrusions to inform a more unified approach to undermining adversaries.

To join this quest, engage with us at attack@mitre.org.

Community Collaboration

ATT&CK Community Workshops | Practitioner-led Forums for Activating ATT&CK

We’re always inspired to see how ATT&CK is being used in innovative ways to upgrade defensive capabilities. The regional ATT&CK community workshops — organized by practitioners, for practitioners — provide forums to share insights, use cases, and collaborative approaches for leveraging ATT&CK.

ATT&CKcon 5.0 | Great Speakers, Content, & Conversations around ATT&CK

ATT&CKcon 5.0 will be arriving in October, featuring both virtual and in-person attendance from McLean, VA. Stay tuned to our Twitter and LinkedIn channels for updates on our Call for Presentations, which will open in the coming months, followed by our illustrious speaker lineup. If your organization is thinking about joining the ATT&CKcon adventure as a sponsor, please reach out to us at attackcon@mitre.org.

Benefactor Program | Empowering Defenders, Sustaining Independence

We want to take a moment to share some insights into the foundational tenets and financial realities of ATT&CK. Much like we crowd-source intelligence and rely on community contributions, ATT&CK itself was built to be independent, responsive, and part of the global community.

From the outset, we deliberately chose not to align ATT&CK with any specific government department or agency. This decision was made to maintain autonomy, flexibility, and to foster collaboration across the broadest spectrum of stakeholders. While this approach has facilitated agility and international partnerships, it also means that ATT&CK lacks a dedicated funding source.

To bridge this funding gap and ensure the continuity of our operations, as well as expanding into new domains, we launched the Benefactor Program last year. This program enables tax-deductible, charitable donations from individuals and organizations who believe in ATT&CK’s mission. These contributions allow us to continue offering free and accessible services while also advancing our capabilities and scope.

We are immensely grateful for the support we have received thus far from initial benefactors SOC Prime, Tidal Cyber, and Zimperium. We remain committed to serving the community with transparency; whether you’re a contributor, a fellow defender, or just getting started, we thank you for being part of ATT&CK’s journey.

Looking Forward

Mark your calendars for the v15 release on April 23! You’ll see some novel content interspersed with familiar elements, as well as more practical defensive measures.

As always, we value the opportunity to collaborate with you in ensuring that ATT&CK remains a living framework, where each contribution, conversation, or new implementation fuels its evolution. We look forward to continuing this adventure with you.

Connect with us on email, Twitter, LinkedIn, or Slack.

©2024 The MITRE Corporation. ALL RIGHTS RESERVED. Approved for public release. Distribution unlimited 24–00779–2.





from MITRE ATT&CK™ https://ift.tt/Rbrpa16
via IFTTT

How secret scanning works

Secret scanning is crucial for securing an enterprise’s security management lifecycle. Secret scanning helps identify and prevent security threats posed by exposed sensitive information, passwords, API keys, and other credentials.

GitHub’s Octoverse highlights several mediums where sensitive information may be exposed, including code, configuration tools, CI/CD platforms, and communication channels used to collaborate. When discovered by bad actors, this type of information can be used to access systems and associated data, resulting in data breaches and other security incidents.

Secret scanning solutions proactively identify potential security threats so they can be remedied before they can be exploited. Scanning solutions search code repositories, commits, configuration tools, and other data sources for sensitive information, passwords and access keys.

Secret scanning is a key component of modern security strategies that helps organizations in several ways, including:

  • Preventing data breaches: Secret scanning solutions help prevent data breaches by identifying and remediating threats before leaked passwords or API keys can be exploited.
  • Improving compliance: Many organizations are subject to regulatory frameworks designed to protect sensitive information, including personally identifiable information (PII).
  • Protecting reputations: Security incidents can significantly damage an organization's reputation, which affects its ability to conduct business and negatively impacts revenue.
  • Cost reduction and avoidance: Organizations affected by data breaches or other security incidents incur significant costs. The primary costs associated with security incidents include legal fees, remediation, additional auditing, and lost business. IBM’s Cost of a Data Breach Report estimates that the average cost of a data breach is $4.45 million.

Approaches to secret scanning

There are multiple programmatic strategies to secret scanning, including:

  • Regular expression scanning: Regular expressions are simply sequences of characters that specify a matching pattern in text. Regular expression scans are useful for evaluating types of sensitive information that often follow a pattern, like API keys or access tokens (see the sketch after this list).
  • Dictionary scanning: The dictionary approach involves using pre-defined data sources of known secrets to identify potential vulnerabilities. The data source may originate from a series of secrets directly entered into the scanning solution or a secrets management platform like HashiCorp Vault. Dictionary scanning focuses on evaluating log files, code repositories, and configuration tools. This approach to scanning is especially effective when a secrets management tool is used as the dictionary’s data source. This approach allows users to understand if a secret is current or no longer in use — something regular expression scanning cannot do.
  • Hybrid scanning: Hybrid scanning combines multiple evaluation approaches. Using regular expression and dictionary-based scanning together would be an example of hybrid scanning. Combining more scanning approaches increases the effectiveness of secret scanning and detects a broader range of secrets while also raising fewer false positives.
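To illustrate the regular expression approach mentioned above, here is a minimal Python sketch. The patterns and the target path are illustrative examples only, not a production rule set:

import pathlib
import re

# Illustrative patterns; real scanners ship much larger, regularly updated rule sets.
PATTERNS = {
    "AWS access key ID": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "GitHub personal access token": re.compile(r"\bghp_[A-Za-z0-9]{36}\b"),
    "Generic secret assignment": re.compile(r"(?i)(?:api[_-]?key|secret|token)\s*[:=]\s*['\"][^'\"]{16,}['\"]"),
}

def scan_file(path: pathlib.Path):
    text = path.read_text(errors="ignore")
    for label, pattern in PATTERNS.items():
        for match in pattern.finditer(text):
            yield label, match.group(0)

# "app/config.env" is a placeholder path.
for label, snippet in scan_file(pathlib.Path("app/config.env")):
    print(f"{label}: {snippet}")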

Common locations for secrets

Secrets can hide in many places, so it’s important to scan for secrets in the most likely locations:

  1. Code: Developers may accidentally hard-code sensitive information, passwords, or API keys directly into their application code or configuration files.
  2. Collaboration tools: Venues like Confluence, JIRA, Slack, and other tools are used for collaboration between DevOps, DevSecOps, Platform Ops, and business partners. During collaboration, secrets and sensitive information may be left on these platforms, which may fall outside the organization’s security policies.
  3. Container images: Container images are a common location to hold hard-coded secrets, which can make containers vulnerable to attackers. When a developer uses base images, especially from a public registry like Docker Hub, they are leveraging external code that should be considered untrusted. These containers may contain hard-coded secrets.
  4. The broader technology stack: A DevOps stack includes a wide range of tools and services, such as repositories, build systems, and deployment pipelines, that may contain secrets that are not properly protected.

HCP Vault Radar as your secrets scanning solution

HCP Vault Radar, now in limited availability, is an extension to the HashiCorp Vault secrets management platform that conducts ongoing reconnaissance of unsecured secrets stored as plaintext in code repositories, configuration tools, DevOps tools, and collaboration tools.

HCP Vault Radar employs a hybrid scanning approach using both regular expressions and dictionaries to find leaked secrets and sensitive information. This broad set of evaluation techniques makes HCP Vault Radar an effective secrets scanning solution that can significantly reduce your organization’s attack surface and risk of data breach.

HCP Vault Radar focuses on several areas to ensure its effectiveness in secrets discovery.

Developer experience: HCP Vault Radar supports Git-based source control tools like GitHub, GitLab, and Bitbucket. It can be automated to conduct scans over code repositories but also supports a developer’s native workflow by scanning commits and pull requests.

Coverage: HCP Vault Radar provides comprehensive coverage of relevant locations where secrets may be found. Supported locations include:

  • Code repositories
  • Container images
  • Configuration files and tools like AWS Parameter Store
  • Amazon S3
  • Confluence
  • File directories
  • Databases

Wide coverage helps ensure vulnerabilities are identified across all areas of the software supply chain process, and broad integrations ensure that exposures can be remedied within common development workflows.

Accuracy: HCP Vault Radar leverages a hybrid approach using a broad array of scanning techniques, including:

  • Scanning for known patterns
  • Testing for high entropy (see the sketch after this list)
  • Checking for liveness whenever possible
  • Applying ignore rules that help users tune secrets detection
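The entropy test mentioned in this list flags strings that look statistically random, the way generated tokens do. A small Python sketch of the idea; the 16-character minimum and 4.0 bits-per-character threshold are illustrative, not tuned values:

import math
from collections import Counter

def shannon_entropy(s: str) -> float:
    # Bits per character; random tokens score high, natural language low.
    counts = Counter(s)
    return -sum((c / len(s)) * math.log2(c / len(s)) for c in counts.values())

def looks_like_secret(token: str, threshold: float = 4.0) -> bool:
    return len(token) >= 16 and shannon_entropy(token) > threshold

print(looks_like_secret("wJalrXUtnFEMI/K7MDENG/bPxRfiCY"))  # True: token-like
print(looks_like_secret("administratoradministrator"))      # False: repetitive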

This hybrid secret scanning approach reduces both false positives and false negatives. False positives can lead to wasted time and effort on the exploration and remediation of non-existent issues, while false negatives can leave vulnerabilities undetected and expose the organization to risk.

Monitoring and alerting: HCP Vault Radar provides monitoring and alerting capabilities to enable quick detection and remediation. Real-time alerts and notifications can be configured to fire when vulnerabilities are identified, and can be integrated with existing incident-response workflows.

Prioritization: HCP Vault Radar is a risk-based code security platform that prioritizes evaluation results based on the presence of:

  • High-risk content in code (secrets in code, PII, and other high risk content)
  • Risks due to misconfigurations of Git repositories
  • Non-conformance with access and identity governance best practices

Customization: You can customize HCP Vault Radar’s scanning rules to meet the specific needs of your organization. This includes defining custom rules for identifying and prioritizing the sensitive data it discovers.

Getting started

HCP Vault Radar is an exciting new addition to HashiCorp Vault’s secret lifecycle management capabilities that helps enterprises reduce risk associated with credential exposure. Discovery of unmanaged secrets and subsequent remediation workflows further differentiate Vault’s secrets lifecycle management offering by enabling organizations to take a proactive approach to remediation before a data breach occurs.

To learn more, check out these resources:



from HashiCorp Blog https://ift.tt/0neVZFT
via IFTTT

OfflRouter Malware Evades Detection in Ukraine for Almost a Decade

Apr 18, 2024 | Newsroom | Incident Response / Cyber Espionage

Select Ukrainian government networks have remained infected with a malware called OfflRouter since 2015.

Cisco Talos said its findings are based on an analysis of over 100 confidential documents that were infected with the VBA macro virus and uploaded to the VirusTotal malware scanning platform.

"The documents contained VBA code to drop and run an executable with the name 'ctrlpanel.exe,'" security researcher Vanja Svajcer said. "The virus is still active in Ukraine and is causing potentially confidential documents to be uploaded to publicly accessible document repositories."

A striking aspect of OfflRouter is its inability to spread via email, necessitating that it be propagated via other means, such as sharing documents and removable media, including USB memory sticks containing the infected documents.

These design choices, intentional or otherwise, are said to have confined the spread of OfflRouter within Ukraine's borders and to a few organizations, thus escaping detection for almost 10 years.

It's currently not known who is responsible for the malware and there are no indications that it was developed by someone from Ukraine.

Whoever it is, they have been described as inventive yet inexperienced owing to the unusual propagation mechanism and the presence of several mistakes in the source code.

OfflRouter has been previously highlighted by MalwareHunterTeam as early as May 2018 and again by the Computer Security Incident Response Team Slovakia (CSIRT.SK) in August 2021, detailing infected documents uploaded to the National Police of Ukraine's website.

The modus operandi has remained virtually unchanged, with the VBA macro-embedded Microsoft Word documents dropping a .NET executable named "ctrlpanel.exe," which then infects all files with the .DOC (not .DOCX) extension found on the system and other removable media with the same macro.

"The infection iterates through a list of the document candidates to infect and uses an innovative method to check the document infection marker to avoid multiple infection processes – the function checks the document creation metadata, adds the creation times, and checks the value of the sum," Svajcer said.

"If the sum is zero, the document is considered already infected."
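Talos's write-up does not spell out the exact arithmetic, but the general idea can be sketched in Python with the open-source olefile library. Everything past the metadata read below is an assumption made for illustration; the component sum merely stands in for whatever calculation OfflRouter performs over the creation times.

import olefile  # third-party: pip install olefile

# Loose illustration of an infection-marker check derived from creation
# metadata. The fields and arithmetic are assumptions, not confirmed details.
def already_infected(path: str) -> bool:
    ole = olefile.OleFileIO(path)
    created = ole.get_metadata().create_time  # a datetime, or None
    ole.close()
    if created is None:
        return False
    return (created.hour + created.minute + created.second) == 0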

That said, the attack becomes successful only when VBA macros are enabled. Microsoft, as of July 2022, has been blocking macros by default in Office documents downloaded from the internet, prompting threat actors to seek other initial access pathways.

Another key function of the malware is to make Windows Registry modifications so as to ensure that the executable runs every time upon booting the system.

"The virus targets only documents with the filename extension .DOC, the default extension for the OLE2 documents, and it will not try to infect other filename extensions," Svajcer elaborated. "The default Word document filename extension for the more recent Word versions is .DOCX, so few documents will be infected as a result."

That's not all. Ctrlpanel.exe is also equipped to search for potential plugins (with the extension .ORP) present on removable drives and execute them on the machine, which implies the malware is expecting the plugins to be delivered via USB drives or CD-ROMs.

On the contrary, if the plugins are already present on a host, OfflRouter takes care of encoding them, copying the files to the root folder of the attached removable media with the filename extension .ORP, and manipulating them to make them hidden so that they are not visible in File Explorer when plugged into another device.

That said, one major unknown is whether the initial vector is a document or the executable module ctrlpanel.exe.

"The advantage of the two-module virus is that it can be spread as a standalone executable or as an infected document," Svajcer said.

"It may even be advantageous to initially spread as an executable as the module can run standalone and set the registry keys to allow execution of the VBA code and changing of the default saved file formats to .DOC before infecting documents. That way, the infection may be a bit stealthier."




from The Hacker News https://ift.tt/LkDK3Bv
via IFTTT

Announcing the Incoming 2024 Citrix Technology Professional (CTP) Class

It’s 2006. Citrix MetaFrame Presentation Server and Password Manager are both mainstream products deployed across enterprise IT. Citrix has also just acquired a company called Reflectent, which would later go on to become the EdgeSight performance and load-testing solution. I have fond memories of supporting customers with these technologies upon joining Citrix those many years ago. That same year, there was another milestone moment that featured a different kind of release – the introduction of the Citrix Technology Professional (CTP) Program, a program for Citrix’s most elite technical champions, content producers, evangelists, and so much more.

18 years later, as we’re well into 2024, CTPs have contributed in countless ways across the ecosystem. Whether writing blogs, hosting webinars, presenting at conferences, building scripts, providing product feedback, or contributing in any number of other ways, the CTP Program and its culture are truly something special that we’re very proud of. Personally, I’ve had the privilege of interacting with a number of CTPs over the years – even presenting alongside them at industry events. Their passion for and commitment to Citrix technologies is just incredible (for some, I occasionally wonder when they sleep).

At the start of the year, I was given the opportunity to lead the CTP Program as the new Program Manager. I am incredibly excited yet humbled by this opportunity which has presented itself. I would like to thank my predecessors and everyone who has worked so hard over the years to elevate the program to where it is today. I pledge to do my best to keep the program true to its core principles of technical excellence, a collaborative spirit, and selfless sharing.

Without further ado, I’m incredibly pleased to welcome the new CTPs for 2024! These individuals were chosen based on their accomplishments, alignment to program goals, and peer feedback from current program members. A big congratulations to you all! I encourage everyone to reach out to these individuals on social media to amplify their well-deserved recognition.

Gaby Grau

Gaby works for Nutanix as an Advisory Solution Architect for EUC. She has decades worth of experience and is often found presenting on Nutanix and Citrix technologies around the globe. She’s passionate about creating the most impactful content possible and helping ensure others are successful. She’s also proud to represent “Women in Tech” telling her story of accomplishments and inspiring other women to pursue a career within the industry.

Jon Bucud

Jon is a former CTA and a steady contributor to Reddit, #CitrixIRC, the World of EUC, X (Twitter), and local technical events. He is committed to “elevating user experiences” including embracing many of Citrix’s HDX capabilities and seamless integration of CVAD and NetScaler. While maintaining strong technical aptitude, Jon also realizes the importance of fostering partnerships and collaborations within the ecosystem to ensure that business expectations are well-aligned to IT’s technical opportunities.

Mahammad Kubaib

Mahammad is an experienced cloud architect with a passion for teaching and making technical concepts clear to understand and implement. He has developed popular Udemy courses on Citrix technologies, written numerous technical blog posts on Citrix best practices and recommendations, and enjoys contributing to the broader Citrix ecosystem any way he can as an industry expert and public speaker. Working in the financial services space, he has real-world experience leveraging Citrix to elevate security standards and meet compliance requirements.

Ray Davis

Ray is a Florida-native who has a profound love for technology, blogging, and sharing practical knowledge with fellow IT enthusiasts. He’s also a former CTA and champion across many areas of the Citrix technology stack. Ray has presented and moderated a number of Citrix webinars on topics ranging from optimizing performance to system automation. Working in a consulting capacity, he’s helped organizations solve critical app delivery challenges across various use cases and verticals.

Serdar Göksu

Serdar is considered a global SME for VDI technologies with extensive experience using Citrix. He is a Citrix Certified Instructor and former CTA who enjoys sharing his technical knowledge on his blog site and through social media. Serdar helps to organize webinars on Citrix topics and learns the details of each Citrix platform and release to allow organizations to easily embrace the latest capabilities within their infrastructure. 

Current CTPs

The following individuals have been renewed as CTPs for the 2024 calendar year. Thank you all for your continued contributions and commitment to Citrix technologies!

Adam Clark, Alex Cooper, Alexander Ervik Johnsen, Andy Paul, Anton van Pelt, Arnaud Pain, Bart Jacobs, Benjamin Crill, Benny Tritsch, Carl Stalhood, Carl Webster, Craig Stones, Dane Young, Donald Wong, Eduardo Molina, Esther Barthel, Fredrik Brattstig, George Spiers, Guy Leech, Henry Heres, James Kindon, James Rankin, Jan Tytgat, Jarian Gibson, Joe Shonk, Julian Jakob, Julian Mooren, Kees Baggerman, Leee Jeffries, Mads Behrendt Petersen, Manuel Winkel, Matthias Schlimm, Mick Hilhorst, Mike Streetz, Neil Spellings, Patrick Coble, Patrick van den Born, Remko Weijnen, Rene Bigler, Ryan Ververs-Bijkerk, Sacha Thomet, Sam Jacobs, Samuel Legrand, Sarah Vogt, Scott Osborne, Shane Kleinert, Shane O’Neill, Steve Greenberg, Thomas Krampe, Thomas Poppelgaard, Thomas Preischl, Thorsten Rood, Tim Mangan, Trond Eirik Haavarstein

For a full list of all current awardees, please visit the CTP awardees page. More information about the program is available on the CTP Program page. If you think you have what it takes to join the CTP Program, look for the 2025 Class application blog coming later this year.



from Citrix Blogs https://ift.tt/DATyLaK
via IFTTT

Getting Git submodules in private Azure DevOps repositories to work in a pipeline

Introduction

Recently, I embarked on a journey to tackle some Infrastructure as Code and Azure DevOps challenges. My solution? Git submodules. This approach proved practical and efficient, making it a fitting choice for my objectives.

My journey with Azure DevOps and Git submodules was not without its hurdles. However, overcoming these challenges led to a functional pipeline and ensured all security options were intact. With some perseverance and problem-solving, you can accomplish this as well.

A quick overview of the setup

I have a main repository where our pipeline lives. That repo has two Git submodules added, which link to their repositories in the same Azure DevOps project.

Adding the submodules to the main repos was done as follows while in the root of our main Git repo.

git submodule add https://workinghardinit@dev.azure.com/workinghardinit/InfraAsCode/_git/AzureFwChildPolMarShip .\IAC\Up\Shared\bicep\AzureFwChildPolMarShip

git submodule add ../AzureFwChildPolMarShip .\IAC\Up\Shared\bicep\AzureFwChildPolMarShip

That is as documented in Git – git-submodule Documentation (git-scm.com), but doing this led to my first problem.


The folder structure of the repositories looks like the one in the picture below when cloned to my workstation.


It shows up in the main Azure DevOps repo as a reference to the submodules repo with a commit ID.


Since I need the files in the submodules to be cloned and made available during our deployment of Azure infrastructure, I added the “submodules: true” key to the checkout step in the checkout stage of my YAML pipeline.

stages:
- stage: checkout
  jobs:
  - job:
    steps:
    - checkout: self
      submodules: true
    - task: PublishBuildArtifacts@1
      inputs:
        PathtoPublish: '$(Build.Repository.LocalPath)'
        ArtifactName: 'iac'
        publishLocation: 'Container'
- stage: dev
  dependsOn: checkout
  displayName: Development
  jobs:
  - deployment:

Those are the relevant details of the repositories and the pipeline so far. At least, that’s what I started with. It was not enough.

Problem 1 – The checkout stage fails with an error when cloning the submodules

While running my pipeline, I encountered a problem with some errors that were confusing to me. The pipeline tries to prompt me for a password, but it can’t. Yet I had already authenticated, so it should not prompt me.

Cloning into 'D:/a/1/s/IAC/Up/Shared/bicep/AzureFwChildPolFleetMgnt'...
fatal: Cannot prompt because user interactivity has been disabled.
fatal: Cannot prompt because user interactivity has been disabled.
fatal: could not read Password for 'https://workinghardinit@dev.azure.com/workinghardinit/InfraAsCode/_git/AzureFwChildPolFleetMgnt': terminal prompts disabled
fatal: clone of 'https://workinghardinit@dev.azure.com/workinghardinit/InfraAsCode/_git/AzureFwChildPolFleetMgnt' into submodule path 'D:/a/1/s/IAC/Up/Shared/bicep/AzureFwChildPolFleetMgnt' failed
Failed to clone 'IAC/Up/Shared/bicep/AzureFwChildPolFleetMgnt'. Retry scheduled


This one had me scratching my head. I mean, my repo and the repositories of the submodules are all in the same Azure DevOps project. That means there should not be a permissions issue. If the submodules’ Azure Repos Git repositories live in one or more projects different from your pipeline’s, and “Limit job authorization scope to current project for non-release pipelines” is set to On (the default) for your YAML pipeline, you must grant the build service identity for your pipeline permission to the second project. See Understand job access tokens – Azure Pipelines | Microsoft Learn. But we are OK here.

So what gives? After a lot of trial and error, it hit me. These are all private Azure DevOps repositories, and I read something about that somewhere. So, I waded through the docs again, found the nugget of information I needed, and only now started to understand. Read along in Pipeline options for Git repositories under the Checkout submodules section.

Checkout submodules

Yes, I added the submodules following the examples I found in many git documents and tutorials, which work fine for public repositories without authentication.

Sure, I was authenticated and did not leave my Azure DevOps organization, not even my project. As I said, all the repositories are part of the same project.

But my URLs for the submodules in the .gitmodules file were pointing to the external ones. I don’t have access, as authentication is not even requested, and the access fails with the above error in my pipeline. So, I edited my .gitmodules file and changed the URLs to relative paths, as the documentation states.

[submodule "IAC/Up/Shared/bicep/AzureFwChildPolMarShip"]

path = IAC/Up/Shared/bicep/AzureFwChildPolMarShip

url = ../AzureFwChildPolMarShip

#url = https://workinghardinit@dev.azure.com/workinghardinit/InfraAsCode/_git/AzureFwChildPolMarShip

[submodule "IAC/Up/Shared/bicep/AzureFwChildPolFleetMgnt"]

path = IAC/Up/Shared/bicep/AzureFwChildPolFleetMgnt

url = ../AzureFwChildPolFleetMgnt

#url = https://workinghardinit@dev.azure.com/workinghardinit/InfraAsCode/_git/AzureFwChildPolFleetMgnt

How can we avoid this? Well, instead of adding a submodule using the full URL path you find in Azure DevOps “Clone Repository” button like below …

git submodule add https://workinghardinit@dev.azure.com/workinghardinit/InfraAsCode/_git/AzureFwChildPolMarShip .\IAC\Up\Shared\bicep\AzureFwChildPolMarShip

use the relative path

git submodule add ../AzureFwChildPolMarShip .\IAC\Up\Shared\bicep\AzureFwChildPolMarShip

Doing so means you get it right when working with private repositories the first time. After I did that, the checkout stage in my pipeline started working. Hurrah!

BONUS TIP. What if the submodule’s remote repository lives in the same Azure DevOps organization but in a different project? With the full URL path, you’d have the same issue. Well, you can still use a relative path, but you need to specify more of it. Instead of just using ../SubModuleRepoInSameProject, use ../../../OtherProject/_git/SubModuleInOtherProject for the url.

See the example and the image below.

git submodule add ../../../ProjectTwo/_git/SubRepoProjectTwo .\MySubModules\SubRepoProjectTwo


Problem 2 – The checkout stage fails with an error when cloning the submodules

So, we solved problem one, only to reveal problem two. The checkout kept failing, but with a different error. Cloning the submodules fails, stating that the repository does not exist or that we lack the proper permissions:

TF401019: The Git repository with name or identifier AzureFwChildPolMarShip does not exist or you do not have permissions for the operation you are attempting.

Below is what it looks like in the pipeline.


When googling this error, I found that I could solve this problem by changing the project settings for our pipelines.

  • Navigate to Project Settings
  • Under Pipelines, go to Settings
  • Set Protect access to repositories in YAML pipelines to Off


When I reran the pipeline, the checkout succeeded. Cool, but I do not want to do this! I like having “Protect access to repositories in YAML pipelines” enabled. So how do I achieve that?

Well, you need to do a couple of things.

Use a repository resource to reference an additional repository in your pipeline

The explanation is in the resources.repositories.repository definition | Microsoft Learn article, but it took me a while to find and understand it all. The repository keyword lets you specify an external repository; use a repository resource to reference an additional repository in your pipeline.

resources:
  repositories:
  - repository: AzureFwChildPolFleetMgntID   # Create an ID to reference this resource in uses
    type: git
    name: AzureFwChildPolFleetMgnt           # The name of the repository for my submodule
  - repository: AzureFwChildPolMarShipID     # Create an ID to reference this resource in uses
    type: git
    name: AzureFwChildPolMarShip             # The name of the repository for my submodule

Add a uses statement to your checkout stage

In Build Azure Repos Git repositories – Azure Pipelines | Microsoft Learn, you read another part of the solution. Your pipeline code must have a reference to the submodule repositories as we defined them in resources.repositories.repository in stages.stage.jobs.job.uses.repositories.

stages:
- stage: checkout
  jobs:
  - job:
    steps:
    - checkout: self
      submodules: true
    - task: PublishBuildArtifacts@1
      inputs:
        PathtoPublish: '$(Build.Repository.LocalPath)'
        ArtifactName: 'iac'
        publishLocation: 'Container'
    uses:
      repositories:
      - AzureFwChildPolFleetMgntID
      - AzureFwChildPolMarShipID

The latter is critical. With “Protect access to repositories in YAML pipelines” enabled, we must explicitly reference the Azure Repos Git repositories of those submodules. That way, we can clone the code we want to use in the pipeline as a checkout step for the job that uses the repository. Git commands such as cloning fail if we do not do this.

Note what Microsoft has to say about this.

Protect access to repositories in YAML pipelines is enabled by default for new organizations and projects created after May 2020. When Protect access to repositories in YAML pipelines is enabled, your YAML pipelines must explicitly reference any Azure Repos Git repositories you want to use in the pipeline as a checkout step in the job that uses the repository. You won’t be able to fetch code using scripting tasks and git commands for an Azure Repos Git repository unless that repo is first explicitly referenced.

Note the conditions when you don’t need to do this and realize that we do need it here:

If your pipeline does not have an explicit checkout step, it behaves as if it has a checkout: self step, and the self repository is checked out.

Suppose you are using a script to perform read-only operations on a repository in a public project. In that case, you don’t need to reference the public project repository during checkout.

If you use a script that provides authentication to the repo, such as a PAT, you don’t need to reference that repository in a checkout step.

Also note that instead of a uses statement, we could check out the Git repositories of the submodules, but I chose this route. Go through the motions of git add ., git commit -m “fixed references to submodule repositories in pipeline”, and git push. Once we have done that, we set “Protect access to repositories in YAML pipelines” back to On to enable it.

Now, rerun the pipeline. Watch the output window, as you need to permit access to both submodule repositories.


Once you permit access, the pipeline run can be completed successfully with your security settings intact.


That’s it! That concludes our demo on getting Git submodules in private Azure DevOps repositories to work in a pipeline.

Overview

To recapitulate, these are the steps we needed to make things work:

  1. The submodules and their Git repos exist in the same project as the Git repo to which you added the submodules. If these are in different projects, we need to take care of security settings in DevOps to make this work.
  2. We add the submodules as a URL relative to the main repository.
  3. Your pipeline code must have the submodules key set to true on the checkout: self step (under stages > stage: checkout > jobs > job > steps).
  4. Your pipeline must reference the submodule repositories in resources.repositories.repository.
  5. You need to leverage the uses statement and refer to the submodule repositories you defined in step 4.


BONUS TIP. Remember the case where a submodule’s remote repository lives in the same Azure DevOps organization but in a different project? We used a relative path like ../../../OtherProject/_git/SubModuleInOtherProject for the url while adding the submodule. Well, in your pipeline resource, reference the name as Project/RepoName and you’ll be fine.

See the example in the image below.


In your main repository’s project setting, you must set the option “Limit job authorization scope to the current project for non-release pipelines” to Off to avoid permission issues!


Conclusion

This article has become my public documentation on getting the checkout stage in my Azure DevOps pipeline to work for a private repo with submodules pointing to other private repos. In this case, all repositories are part of the same project. Figuring out how to clone a submodule in an Azure DevOps pipeline took me an extremely long day.

So, to never forget and to find it again if I ever forget it, I wrote it all down in this article and shared it with you.

What can I say? Writing documentation about Git and pipelines is hard. Reading and understanding that documentation is also challenging. Finding, combining, linking, and understanding various pieces of information from everywhere is even more difficult. It requires time and persistence combined with trial and error. That sucks until you find the solution. Then, you are delighted. Don’t linger on the effort, and document your findings! Maybe, one day, I can help a reader of this article save time and effort. And yes, that reader could be me.



from StarWind Blog https://ift.tt/xXWcL4g
via IFTTT

How to Conduct Advanced Static Analysis in a Malware Sandbox

Sandboxes are synonymous with dynamic malware analysis. They help to execute malicious files in a safe virtual environment and observe their behavior. However, they also offer plenty of value in terms of static analysis. See these five scenarios where a sandbox can prove to be a useful tool in your investigations.

Detecting Threats in PDFs

PDF files are frequently exploited by threat actors to deliver payloads. Static analysis in a sandbox makes it possible to expose any threat a malicious PDF contains by extracting its structure.

The presence of JavaScript or Bash scripts can reveal a possible mechanism for downloading and executing malware.

Sandboxes like ANY.RUN also allow users to scrutinize URLs found in PDFs to identify suspicious domains, potential command and control (C2) servers, or other indicators of compromise.
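Outside a sandbox, a rough first pass at the same idea takes only a few lines of Python. The dictionary keys below are standard PDF names commonly abused in malicious documents; the byte-level scan is a quick triage, not a substitute for parsing the object tree, and the filename is a placeholder:

import re

RISKY_KEYS = (b"/JavaScript", b"/JS", b"/OpenAction", b"/AA", b"/Launch", b"/EmbeddedFile")

def triage_pdf(path: str):
    # Flag risky dictionary keys and pull URLs from the raw bytes.
    data = open(path, "rb").read()
    keys = [k.decode() for k in RISKY_KEYS if k in data]
    urls = [u.decode(errors="replace") for u in re.findall(rb"https?://[^\s<>)\"']+", data)]
    return keys, urls

keys, urls = triage_pdf("sample.pdf")
print("risky keys:", keys)
print("urls to investigate:", urls)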

Example:

Static analysis of a PDF file in ANY.RUN

Interactivity allows our users to manipulate files within a VM as they wish, but static Discovery offers even more opportunities.

As part of this analysis session, the static module lists several URLs that can be found inside the PDF. To investigate them, we can submit each of these for further sandbox analysis by simply clicking a corresponding button.

See how static and dynamic analysis in the ANY.RUN sandbox can benefit your security team.

Book a personal demo of the service today!

Exposing LNK Abuse

LNK files are shortcuts that direct to an executable file, a document, or a folder. A sandbox can provide a transparent view of the LNK file's properties, such as its target path, icon location, and any embedded commands or scripts.

Viewing commands in LNK files can reveal attempts to launch malicious software or connect to remote servers.

Static analysis in a sandbox is particularly useful in identifying threats that do not spawn a new process. These can be difficult to detect through dynamic analysis alone.

Example:

The command line arguments shown in the static module reveal malicious activity

Examining the contents of LNK files can help you detect attacks before they begin.

In this sandbox session, we can discover every detail about the LNK file, including its command line arguments which show that the file is configured to download and execute a payload from a malicious URL.

Investigating Spam and Phishing Emails

Email remains one of the most common vectors for malware distribution. A sandbox lets you upload an email file to the service and analyze it safely to spot spam and hidden malicious elements faster and without any risk to your infrastructure.

A sandbox shows an email preview and lists metadata and Indicators of Compromise (IOCs). You can examine the content of the email without opening it and study the metadata that provides information about the email's origin, timestamps, and other relevant details.
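For a sense of what that metadata extraction involves, here is a minimal sketch using Python's standard email module; the filename is a placeholder:

from email import policy
from email.parser import BytesParser

# Parse an .eml file without rendering it.
with open("sample.eml", "rb") as fp:
    msg = BytesParser(policy=policy.default).parse(fp)

# Basic origin metadata.
for header in ("From", "To", "Subject", "Date", "Received"):
    print(f"{header}: {msg[header]}")

# Attachment names and types often reveal the lure, such as an archive
# hiding an executable.
for part in msg.iter_attachments():
    print("attachment:", part.get_filename(), part.get_content_type())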

The ANY.RUN sandbox also integrates RSPAMD, an open-source module that assigns a phishing score to each analyzed email and displays all of its elements using these features:

  • Header Analysis: Examines email headers for sender authenticity and anomalies.
  • Reputation Checks: Identifies known spam/malware sources using DNSBLs and URIBLs.
  • Bayesian Filtering: Classifies emails based on probabilistic analysis.

In ANY.RUN, you can move beyond static analysis and interact with the email directly like you would on your own computer. This means you can download and open attachments, including password-protected ones, or follow through the entire phishing attack, starting from the initial link.

Example:

Details of an .eml file static analysis

All content within email files is extracted and made available through static analysis in the sandbox, allowing users to view details about it even without accessing the VM itself.

In this analysis session, we can observe a .RAR attachment which accompanies the email. Given that one of the files located inside of this archive is an executable named "Commercial Invoice PDF", we can instantly assume its malicious nature.

To analyze the executable, we can simply click the "Submit to analyze" button and launch a new sandbox session.

Analyzing Suspicious Office Documents

Microsoft Office documents, such as Word, Excel, and PowerPoint files, are one of the leading security risks in both corporate and personal settings. Sandbox static analysis can be employed to scrutinize various elements of such documents without opening them. These include:

  • Content: Sandbox static analysis enables you to examine the document's content for signs of social engineering tactics, phishing attempts, or suspicious links.
  • Macros: Attackers often exploit Visual Basic for Applications (VBA) code in Office documents to automate malicious tasks. These tasks can range from downloading and executing malware to stealing sensitive data. ANY.RUN shows the entire execution chain of the script, enabling you to study it step by step (see the extraction sketch after this list).
  • Images and QR Codes: Steganography techniques let attackers conceal code within images. Sandbox static analysis is capable of extracting this hidden data. QR codes embedded within documents may also contain malicious links. A sandbox can decode these and expose the potential threats.
  • Metadata: Information about the document's creation, modification, author, etc. can help you understand the document's origin.
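Outside the sandbox, macro extraction for offline review is commonly done with the open-source oletools package. A minimal sketch, with a placeholder filename:

from oletools.olevba import VBA_Parser  # third-party: pip install oletools

# Dump any VBA macro source found in the document for offline review.
vba = VBA_Parser("invoice.docm")
try:
    if vba.detect_vba_macros():
        for _, stream_path, vba_filename, vba_code in vba.extract_macros():
            print(f"--- {stream_path} / {vba_filename} ---")
            print(vba_code)
    else:
        print("No VBA macros detected.")
finally:
    vba.close()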

Example:

The sandbox can show a preview of Office files

Microsoft Office files come in various formats, and analyzing their internal structure can sometimes be challenging. Static Discovery for Office files allows you to examine macros without needing additional tools.

All embedded files, including images, scripts, and executable files, are also accessible for further analysis. QR codes are detected during static analysis, and users can submit a new task that opens the content encoded in these codes, such as URLs.

In this session, static analysis makes it possible to see that the analyzed .pptx file contains a .zip archive.

Looking Inside Malicious Archives

Archives like ZIP, tar.gz, .bz2, and RAR are frequently used as means to bypass basic detection methods. A sandbox environment provides a safe and isolated space to analyze these files.

For instance, sandboxes can unpack archives to reveal their contents, including executable files, scripts, and other potentially malicious components. These files can then be analyzed using the built-in static module to expose their threats.
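The listing step is easy to reproduce for ZIP archives with Python's standard library; the filename and the flagged extensions are illustrative:

import zipfile

SUSPECT_EXTENSIONS = (".exe", ".dll", ".js", ".vbs", ".lnk", ".scr")

# List archive members without extracting them.
with zipfile.ZipFile("attachment.zip") as zf:
    for info in zf.infolist():
        flag = "  <-- executable/script" if info.filename.lower().endswith(SUSPECT_EXTENSIONS) else ""
        print(f"{info.filename} ({info.file_size} bytes){flag}")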

Example:

ZIP file structure displayed in the static analysis window

In ANY.RUN, users can submit files for new analysis directly from archives in the Static Discovery window. This eliminates the need to download or manually unpack them inside a VM.

In this analysis session, we once again see an archive with files that can be studied one by one to determine whether any additional analysis is required.

Conduct Static and Dynamic Analysis in ANY.RUN

ANY.RUN is a cloud-based sandbox with advanced static and dynamic analysis capabilities. The service lets you scan suspicious files and links and get the first results on their threat level in under 40 seconds. It gives you a real-time overview of the network traffic, registry activities, and processes occurring during malware execution, highlighting malicious behavior and the tactics, techniques, and procedures (TTPs).

ANY.RUN provides you with complete control over the VM, making it possible to interact with the virtual environment just like on a standard computer. The sandbox generates comprehensive reports that feature key threat information, including indicators of compromise (IOCs).

Start using ANY.RUN today for free and enjoy unlimited malware analysis in Windows and Linux VMs.




from The Hacker News https://ift.tt/v8yI6GJ
via IFTTT