Blog

  • CVE-2021-26084 PoC write-up


This is my PoC write-up for CVE-2021-26084, which amounts to RCE and affects certain versions of Confluence Server and Data Center instances. I've come across this vulnerability a few times during the course of my research, so decided to add a brief PoC write-up for ease of future reference.

Summary

CVE-2021-26084 is an OGNL injection vulnerability allowing an unauthenticated attacker to execute arbitrary code on the targeted instance. It may be worth noting that statements from the vendor indicate this vulnerability is being actively exploited in the wild and that affected servers should be patched immediately.

Steps to Reproduce

I have included a downloadable PoC (proof-of-concept) Python script below, which enables owners of vulnerable instances to safely (and remotely) reproduce the necessary steps to validate this vulnerability themselves.

Download Python Script

Example Usage:

python3 poc.py -u https://[TARGET HOST HERE] -p /pages/[PAGE VARIABLE HERE].action?SpaceKey=x

The Vulnerability

Atlassian Confluence is a widely used platform written in Java for managing project documentation and planning, typically deployed in corporate environments for teams to collaborate in shared workspaces. Back in 2017, security researcher Benny Jacob discovered that unauthenticated users could execute arbitrary code by targeting HTML queries with OGNL injection techniques.

HTTP is a request/response protocol described in RFCs 7230 - 7237 and other RFCs. A request is sent by a client to a server, which in turn sends a response back to the client. An HTTP request consists of a request line, various headers, an empty line, and an optional message body:

Method SP Request-URI SP HTTP-Version CRLF
*( header-field CRLF )
CRLF
[ message-body ]

where CRLF represents the new line sequence Carriage Return (CR) followed by Line Feed (LF), and SP represents a space character. Parameters can be passed from the client to the server as name-value pairs in either the Request-URI or in the message body, depending on the method used and the Content-Type header. For example, a simple HTTP request passing a parameter named "param" with value "1" using the GET method might look like:

GET /example.action?param=1 HTTP/1.1
Host: target.example

A corresponding HTTP request using the POST method might look as follows:

POST /example.action HTTP/1.1
Host: target.example
Content-Type: application/x-www-form-urlencoded
Content-Length: 7

param=1

Confluence uses the Webwork web application framework to map URLs to Java classes, creating what is known as an action. Action URLs end with the '.action' suffix and are defined in the xwork.xml file in confluence-<version>.jar and in the atlassian-plugin.xml file in JAR files of included plugins. Each action entry contains at least a name attribute, defining the action name; a class attribute, defining the Java class implementing the action; and at least one result element, which decides the Velocity template to render after the action is invoked, based on the result of the action. Common return values from actions are "error", "input", and "success", but any value may be used if there is a matching result element in the associated XWork XML. Action entries can contain a method attribute, which allows invocation of a specific method of the specified Java class. When no method is specified, the doDefault() method of the action class is called. The following is a sample action entry for the doenterpagevariables action (the template paths shown are illustrative):

<action name="doenterpagevariables" class="com.atlassian.confluence.pages.actions.PageVariablesAction" method="doEnter">
    <result name="success" type="velocity">/pages/enterpagevariables.vm</result>
    <result name="input" type="velocity">/pages/enterpagevariables.vm</result>
    <result name="error" type="velocity">/pages/enterpagevariables.vm</result>
</action>

In the above example, the doEnter() method of the com.atlassian.confluence.pages.actions.PageVariablesAction class handles requests to doenterpagevariables.action and will return values such as "success", "input", or "error". This results in the appropriate Velocity template being rendered.
Confluence supports the use of Object Graph Navigational Language (OGNL) expressions to dynamically generate web page content from Velocity templates using the Webwork library. OGNL is a dynamic Expression Language (EL) with terse syntax for getting and setting properties of Java objects, list projections, lambda expressions, etc. OGNL expressions contain strings combined to form a navigation chain. The strings can be property names, method calls, array indices, and so on. OGNL expressions are evaluated against the initial, or root, context object supplied to the evaluator in the form of an OGNL Context.

Confluence uses a container object of class com.opensymphony.webwork.views.jsp.ui.template.TemplateRenderingContext to store objects needed to execute an action. These objects include session identifiers, request parameters, the spaceKey, etc. TemplateRenderingContext also contains a com.opensymphony.xwork.util.OgnlValueStack object to push and store objects against which dynamic Expression Languages (EL) are evaluated. When the EL compiler needs to resolve an expression, it searches down the stack starting with the latest object pushed into it. OGNL is the EL used by the Webwork library to render Velocity templates defined in Confluence, allowing access to Confluence objects exposed via the current context. For example, the $action variable returns the current Webwork action object.

OGNL expressions in Velocity templates are parsed using the ognl.OgnlParser.expression() method. The expression is parsed into a series of tokens based on the input string. The ognl.JavaCharStream.readChar() method, called by the OGNL parser, evaluates Unicode escape characters in the form \uXXXX, where "XXXX" is the hexadecimal code of the Unicode character represented. Therefore, if an expression includes the character sequence \u0027, it is evaluated as a closing quote character ('), escaping the context of evaluation as a string literal and allowing an attacker to append an arbitrary OGNL expression. If an OGNL expression is parsed in a Velocity template within single quotes and the expression's value is obtained from user input without any sanitization, an arbitrary OGNL expression can be injected.

An OGNL injection vulnerability exists in Atlassian Confluence. The vulnerability is due to insufficient validation of user input used to set variables evaluated in Velocity templates within single quotes. By including the \u0027 character sequence in user input, an attacker can escape the string literal and append an arbitrary OGNL expression. Before OGNL expressions are evaluated by Webwork, they are compared against a list of unsafe node types, property names, method names, and variable names in the com.opensymphony.webwork.util.SafeExpressionUtil.containsUnsafeExpression() method. However, this list is not exhaustive, and arbitrary Java objects can be instantiated without using any of the unsafe elements listed, so an expression executing an OS command can still be accepted as a safe expression by this method (an illustrative example is shown at the end of this section).

A remote attacker can exploit this vulnerability by sending a crafted HTTP request containing a malicious parameter to a vulnerable server. Successful exploitation can result in the execution of arbitrary code with the privileges of the server.
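For illustration, public PoCs for this CVE use an expression of the following shape, URL-encoded and supplied via a vulnerable parameter such as queryString. This is a reconstructed sketch based on publicly circulated payloads, not necessarily the exact expression referenced above:

queryString=aaa\u0027%2b{Class.forName(\u0027javax.script.ScriptEngineManager\u0027).newInstance().getEngineByName(\u0027JavaScript\u0027).eval(\u0027java.lang.Runtime.getRuntime().exec("id")\u0027)}%2b\u0027

Here, \u0027 closes the template's single-quoted string literal, %2b is a URL-encoded '+' used for string concatenation, and javax.script.ScriptEngineManager (a class absent from the unsafe list) evaluates JavaScript that in turn spawns an OS command.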
Remote Detection of Generic Attacks

To detect this attack, you should monitor all HTTP traffic requests where the path component of the request-URI contains one of the strings in the "URI Path" column of the following table:

URI Path                                      Vulnerable Parameters
-------------------------------------------  -------------------------
/users/darkfeatures.action                   featureKey
/users/enabledarkfeature.action              featureKey
/users/disabledarkfeature.action             featureKey
/login.action                                token
/dologin.action                              token
/signup.action                               token
/dosignup.action                             token
/pages/createpage-entervariables.action      queryString, linkCreation
/pages/doenterpagevariables.action           queryString
/pages/createpage.action                     queryString
/pages/createpage-choosetemplate.action      queryString
/pages/docreatepagefromtemplate.action       queryString
/pages/docreatepage.action                   queryString
/pages/createblogpost.action                 queryString
/pages/docreateblogpost.action               queryString
/pages/copypage.action                       queryString
/pages/docopypage.action                     queryString
/plugins/editor-loader/editor.action         syncRev

If such a request is found, you should inspect the HTTP request method. If the request method is POST, look for the respective vulnerable parameters from the table above in the body of the HTTP request; if the request method is GET, look for the parameters in the request-URI of the HTTP request. Check to see if the value of any of the vulnerable parameters contains the string \u0027 or its URL-encoded form. If so, the traffic should be considered malicious and an attack exploiting this vulnerability is likely underway (a minimal detection sketch is included at the end of this post).

Remediation

If you are running an affected version, upgrade to version 7.13.0 (LTS) or higher. If you are running 6.13.x versions and cannot upgrade to 7.13.0 (LTS), then upgrade to version 6.13.23. If you are running 7.4.x versions and cannot upgrade to 7.13.0 (LTS), then upgrade to version 7.4.11. If you are running 7.11.x versions and cannot upgrade to 7.13.0 (LTS), then upgrade to version 7.11.6. If you are running 7.12.x versions and cannot upgrade to 7.13.0 (LTS), then upgrade to version 7.12.5.

If you are unable to upgrade Confluence immediately, then as a temporary workaround you can mitigate the issue by following the steps outlined in the official advisory.

Confluence Server or Data Center node running on Linux:

If you run Confluence in a cluster, you will need to repeat this process on each node. You don't need to shut down the whole cluster.

Shut down Confluence.

Download the cve-2021-26084-update.sh to the Confluence Linux server.

Edit the cve-2021-26084-update.sh file and set INSTALLATION_DIRECTORY to your Confluence installation directory, for example:

INSTALLATION_DIRECTORY=/opt/atlassian/confluence

Save the file.

Give the script execute permission:

chmod 700 cve-2021-26084-update.sh

Change to the Linux user that owns the files in the Confluence installation directory, for example:

$ ls -l /opt/atlassian/confluence | grep bin
drwxr-xr-x 3 root root 4096 Aug 18 17:07 bin
# In this first example, we change to the 'root' user
# to run the workaround script
$ sudo su root

$ ls -l /opt/atlassian/confluence | grep bin
drwxr-xr-x 3 confluence confluence 4096 Aug 18 17:07 bin
# In this second example, we need to change to the 'confluence' user
# to run the workaround script
$ sudo su confluence

Run the workaround script:

./cve-2021-26084-update.sh

The expected output should confirm up to five files updated and end with Update completed!
The number of files updated will differ, depending on your Confluence version.

Restart Confluence.

Confluence Server or Data Center node running on Windows:

If you run Confluence in a cluster, you will need to repeat this process on each node. You don't need to shut down the whole cluster.

Shut down Confluence.

Download the cve-2021-26084-update.ps1 to the Confluence Windows server.

Edit the cve-2021-26084-update.ps1 file and set the INSTALLATION_DIRECTORY. Replace Set_Your_Confluence_Install_Dir_Here with your Confluence installation directory, for example:

$INSTALLATION_DIRECTORY='C:\Program Files\Atlassian\Confluence'

Save the file.

Open up Windows PowerShell (use Run As Administrator). Due to PowerShell's default restrictive execution policy, run the script using this exact command:

Get-Content .\cve-2021-26084-update.ps1 | powershell.exe -noprofile -

The expected output should show the status of up to five files updated, encounter no errors (errors will usually show in red), and end with:

Update completed!

The number of files updated will differ, depending on your Confluence version.

Restart Confluence.

Remember, if you run Confluence in a cluster, make sure you run this script on all of your nodes.
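As referenced in the detection guidance above, the following is a minimal Python sketch of the indicator check, assuming requests have already been parsed into a path and a parameter dictionary. The function and example values are illustrative, not a production detection rule:

from urllib.parse import unquote

# Monitored paths and their vulnerable parameters (from the table above)
VULNERABLE_PARAMS = {
    "/users/darkfeatures.action": ["featureKey"],
    "/users/enabledarkfeature.action": ["featureKey"],
    "/users/disabledarkfeature.action": ["featureKey"],
    "/login.action": ["token"],
    "/dologin.action": ["token"],
    "/signup.action": ["token"],
    "/dosignup.action": ["token"],
    "/pages/createpage-entervariables.action": ["queryString", "linkCreation"],
    "/pages/doenterpagevariables.action": ["queryString"],
    "/pages/createpage.action": ["queryString"],
    "/pages/createpage-choosetemplate.action": ["queryString"],
    "/pages/docreatepagefromtemplate.action": ["queryString"],
    "/pages/docreatepage.action": ["queryString"],
    "/pages/createblogpost.action": ["queryString"],
    "/pages/docreateblogpost.action": ["queryString"],
    "/pages/copypage.action": ["queryString"],
    "/pages/docopypage.action": ["queryString"],
    "/plugins/editor-loader/editor.action": ["syncRev"],
}

def is_suspicious(path: str, params: dict) -> bool:
    """Flag a request if a monitored parameter contains the \\u0027 escape."""
    for name in VULNERABLE_PARAMS.get(path, []):
        # unquote() also catches the URL-encoded form (%5Cu0027)
        if "\\u0027" in unquote(params.get(name, "")):
            return True
    return False

# Example: a GET request carrying the URL-encoded escape sequence
print(is_suspicious("/pages/doenterpagevariables.action",
                    {"queryString": "aaa%5Cu0027%2b..."}))  # True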

  • I hacked the Dutch Tax Administration and got a trophy

    • General
    • by Jacob Riggs
    • 12-02-2022

The Dutch Tax Administration (Belastingdienst) sent me a trophy and formal letter of appreciation on behalf of the Dutch government. The front engraving on the trophy playfully reads, "I hacked the Dutch Tax Administration and never got a refund". Together with the trophy was a formal letter of appreciation:

"On behalf of the Dutch Tax and Customs Administration, we would like to thank Jacob for participating in our Coordinated Vulnerability Disclosure program. For that, we present this letter of appreciation to Jacob. At the Dutch Tax and Customs Administration, we consider the security of our systems a top priority. But no matter how much effort we put into system security, there can still be vulnerabilities present. For that, we warmly welcome people like Jacob as a participant in the Coordinated Vulnerability Disclosure program. We appreciate not only that you have reported a security issue to us, but also that you have done this professionally."

I would like to extend my thanks to the Belastingdienst Security Operations Center for sending me such a kind gift.

  • How to integrate Nuclei with Interactsh and Notify


This is my walkthrough for installing Nuclei from ProjectDiscovery and how to integrate a suite of tools to assist with automation tasks. This includes subfinder for subdomain discovery, httpx for probing to validate live hosts, setting up your own self-hosted Interactsh server for OOB (out-of-band) testing, and how to install and configure Notify for the convenience of alerting on any identified vulnerabilities via external channels such as email, Slack, Discord, and Telegram. Throughout this guide I'll be using a standard Kali image, though you can of course use whichever flavour of Linux you prefer.

Prerequisites

Install Golang:

sudo apt install golang

Update Golang:

git clone https://github.com/udhos/update-golang
cd update-golang
sudo ./update-golang.sh

Set the golang environment module to auto:

go env -w GO111MODULE=auto

Install Go:

wget https://go.dev/dl/go1.17.5.linux-amd64.tar.gz
sudo tar -C /usr/local/ -xzf go1.17.5.linux-amd64.tar.gz

Set the path variable within the bashrc file:

echo 'export GOROOT=/usr/local/go' >> .bashrc
echo 'export GOPATH=$HOME/go' >> .bashrc
echo 'export PATH=$GOPATH/bin:$GOROOT/bin:$HOME/.local/bin:$PATH' >> .bashrc

Refresh the bashrc file:

source ~/.bashrc

Now install the following suite of tools (these are my baseline for optimal automation).

Interactsh

We need to set up a self-hosted Interactsh server to enable secure testing of OOB (out-of-band) vulnerabilities. To get started we need our own remote VPS. For this guide I'll be using a Digital Ocean droplet.

Install the Interactsh client:

go install -v github.com/projectdiscovery/interactsh/cmd/interactsh-client@latest

First, create a Digital Ocean account if you don't have one already. Then in the top-right of your account dashboard, click to create a droplet. Choose a droplet image (for this example I've opted to use Debian). Select a droplet plan (for this example I've opted to go with basic). Select a CPU (for this example I've opted for a single CPU). Select the region where your desired VPS will reside (for this example I've opted for London). Select an authentication protocol for droplet access (for this example I've selected a root password). Now we can create our droplet. Once the droplet is created, we should take note of the public IP address, as we'll need this later. Now we can click the three dots to access the droplet console. Then proceed to access the console via our web browser and login as root, using the password we defined earlier.

Install Interactsh on the remote VPS droplet:

sudo apt update
sudo apt install golang
go env -w GO111MODULE=auto
wget https://go.dev/dl/go1.17.5.linux-amd64.tar.gz
sudo tar -C /usr/local/ -xzf go1.17.5.linux-amd64.tar.gz
echo 'export GOROOT=/usr/local/go' >> .bashrc
echo 'export GOPATH=$HOME/go' >> .bashrc
echo 'export PATH=$GOPATH/bin:$GOROOT/bin:$HOME/.local/bin:$PATH' >> .bashrc
source ~/.bashrc

Then install the Interactsh server binary itself:

go install -v github.com/projectdiscovery/interactsh/cmd/interactsh-server@latest

Register and configure the desired domain

Next we need to register our desired domain name with our preferred domain registrar. In this guide I'll be demonstrating the process using GoDaddy with the domain I selected for my OOB testing: oobtest.com

First we need to login to GoDaddy, navigate to our account home page, and then scroll down to our domains and click Set up to configure the desired domain DNS settings. Now we click on Add and navigate to Host Names. Now we add our desired hostnames, which in this case will be directing NS1 and NS2 to the public IP address of our droplet. Now we can navigate back to our DNS Management page and scroll down to Nameservers.
As these are configured to the default GoDaddy nameservers, we want to change them. Now we can enter our custom nameservers for our domain. Once the nameserver configuration is saved, we will need to wait for DNS propagation to complete in the background, which usually takes a few hours. Once this is done, our OOB testing server is ready to deploy. Afterwards, we can login to our droplet and start the server:

interactsh-server -domain oobtest.com

At this stage the server is live and listening for any OOB interactions. To configure the server for secure communication with the client, we can leverage the -auth flag to generate an authentication token, which will remain valid for the duration of the session:

interactsh-server -domain oobtest.com -auth

Nuclei

Install Nuclei:

go install -v github.com/projectdiscovery/nuclei/v2/cmd/nuclei@latest

Run nuclei for the first time to install the templates.

Subfinder

Install Subfinder:

go install -v github.com/projectdiscovery/subfinder/v2/cmd/subfinder@latest

Now we need to add API keys to the Subfinder configuration file. Run subfinder for the first time to generate the default config file. Now we can open and edit the configuration file:

sudo nano ~/.config/subfinder/config.yaml

Scroll to the bottom of the file and add your API keys for the corresponding third-party service providers. If you do not yet have API keys for the named service providers, you will need to register an account at each one and copy over the API key they issue you. Once completed, the provider entries in your configuration file should look something like this (placeholder values shown):

shodan:
  - SHODAN_API_KEY_HERE
virustotal:
  - VIRUSTOTAL_API_KEY_HERE

Save and exit.

httpx

Install httpx:

go install -v github.com/projectdiscovery/httpx/cmd/httpx@latest

Notify

In this example I'll be demonstrating how to set up Notify for Discord. These steps are similar for the alternative channels which are currently supported, such as Slack and Telegram.

Install Notify:

go install -v github.com/projectdiscovery/notify/cmd/notify@latest

Set up the Notify configuration file:

sudo mkdir ~/.config/notify
cd ~/.config/notify

Create a Discord server and choose a server name. Edit the channel within the server and change the channel name to notifications. Navigate to Integrations and create a webhook for the channel. Enter the desired bot name and copy the webhook URL to clipboard.

Now open and edit the configuration file:

sudo nano provider-config.yaml

Enter the following:

discord:
  - id: "Discord Server for Nuclei Alerts"
    discord_channel: "notifications"
    discord_username: "Notify Bot"
    discord_format: "{{data}}"
    discord_webhook_url: "[PASTE WEBHOOK URL HERE]"

Save and exit.

Ready to go

Now we can begin using the suite of tools together. First we want to initialise our Interactsh client:

interactsh-client -v -server https://oobtest.com -token [INTERACTSH SERVER TOKEN]

The client communicates with the server to generate a unique subdomain 'payload' for OOB testing. This will listen for OOB interactions, and with the -v flag present will provide verbose logging within the terminal window for any requests it receives. With the Interactsh client generated subdomain, and the Interactsh server generated auth token, we can feed these values into our Nuclei command to include OOB testing in our scans.
For example:

subfinder -d jacobriggs.io -o subdomains.txt | httpx -list subdomains.txt | nuclei -stats -l subdomains.txt -t takeovers/ -t cves/ -iserver "https://c6u975tmtn7kpis0l360cg6j8fayyyyyn.oobtest.com" -itoken "ac3dc1c75be69feb5a6a2d4bf126cff3b1a4fb4f8cf2f28fb8a827745302ceaf" -o vulns.txt; notify -data vulns.txt -bulk

I will briefly break down the arguments in this command below:

subfinder will enumerate the subdomains of the designated target domain and write all the subdomains it finds to a subdomains.txt file.

httpx will then probe the subdomains to validate which ones are alive.

Nuclei will then ingest the alive subdomains, run each subdomain URL through the defined template scans (in this case to check for subdomain takeovers and any vulnerabilities corresponding to known CVEs), and then output any logged findings to a vulns.txt file. With the self-hosted Interactsh instance specified, Nuclei will auto-correlate the requests it makes with any OOB interactions it identifies on the listening Interactsh server.

Once the Nuclei scanning is complete, Notify will then parse the vulns.txt file and send a bulk alert over the webhook to the corresponding notification channel (in this case alerts will arrive in the Discord channel we set up earlier).

  • Source code disclosure via exposed .git


This is my write-up on a misconfigured .git repo I found during my day off, and how the potential exploitation of this vulnerability can amount to source code disclosure.

During my day off I took a brief look at a particular vendor my employer was in the process of procuring a new service from. I quickly identified what appeared to be an exposed .git repo, which I was able to provisionally validate over HTTP. Whilst I wasn't able to view the .git folder itself because directory listing is disabled on the server, I was able to confirm the repository contents were accessible.

Example #1: https://[TARGET]/.git/config
Example #2: https://[TARGET]/.git/logs/HEAD

Here I will walk through how we can extract the contents of a repository like this to help identify the impact of a vulnerability such as source code disclosure with a clear PoC.

To dump this repository locally for analysis, and to help quantify the number of objects within it, we can use the dumper tool from GitTools:

git clone https://github.com/internetwache/GitTools.git
cd GitTools/Dumper
./gitdumper.sh https://[TARGET]/.git/ ~/target

To view a summary of the file tree:

cd ~/target
tree -a .git/objects

The files within the repository are identified by their corresponding hash values, though the full hash string for each actually includes the first two characters of its corresponding subfolder within the tree. This means we need to prepend these to the file name to complete the 40-character hash string. We can pull these 40-character hashes by concatenating the subfolder name with the filename, using find and then piping the results into the format we want using awk:

find .git/objects -type f | awk -F/ '{print $3$4}'

We can use a for loop to identify the type of all files within the objects directory:

for i in $(find .git/objects -type f | awk -F/ '{print $3$4}'); do git cat-file -t $i; done

Here we can see these objects consist of a number of trees, commits, and blobs (binary large objects). In this example, I have a calculated total of:

1,276 trees
910 commits
923 blobs

By default, git stores objects in .git/objects as their original contents, compressed using the zlib library. This means we cannot view objects within a text editor as-is, and must instead rely on alternative options such as git cat-file (see the short Python sketch at the end of this post).

We can check the type of each of these files individually using their identified hashes:

git cat-file -t [FULL FILE HASH]

We can preview the contents of each of these files individually using their identified hashes:

git cat-file -p [FULL FILE HASH] | head

In my last example, we can see this particular blob contains PHP code. As PHP is a server-side scripting language (almost like a blueprint to backend functionality), this can be used to evidence that server-side source code is exposed within this repository. The impact of this can vary depending on the functionality, purpose, and volume of code, but in my experience often results in the exposure of backend configuration settings, hardcoded credentials (such as usernames and passwords), API tokens, and other endpoints. If this is a repository upon which any proprietary software components rely, then source code disclosure of this type can also present a number of issues relating to theft of intellectual property.

This finding was duly reported to the affected vendor within 24 hours of being identified.
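As referenced above, here is a minimal Python sketch of what git cat-file is doing under the hood when it reads a loose object from .git/objects. The object path passed as an argument is a hypothetical example:

import sys
import zlib

# Usage: python3 cat_object.py .git/objects/ab/cdef...
# Loose objects are zlib-compressed and take the form: "<type> <size>\x00<content>"
raw = zlib.decompress(open(sys.argv[1], "rb").read())
header, _, content = raw.partition(b"\x00")
print(header.decode())                   # e.g. "blob 1234" (object type and size)
sys.stdout.buffer.write(content[:200])   # preview the first bytes of the content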

  • My limited edition print art collection


Over the years I've collected a number of prints from a particular artist: David Ambarzumjan, an incredibly talented painter from Germany who combines abstract and surrealistic elements to express his fascination with nature in all its diversity and unpredictability. His works can be found in private collections and exhibitions all around the world, some of which I've been lucky enough to acquire for decorative placements throughout my home. Here I've documented some of the limited edition prints I've acquired from his Brushstrokes in Time collection and any I'm still interested in buying. I'll endeavour to update this blog post whenever I'm successful in my efforts.

THIS WAS WATER (1/100)
ZEBRA CROSSING (15/50)
SHARKS IN MONTMARTRE (6/50)
WATERSHED (6/100)
STRAY (8/50)
HUMAN NATURE (OPEN EDITION)
RECOVER (OPEN EDITION)
BREATHE (**SEEKING**)
GEZEITENWELLE (97/100)

If you have any of these limited edition prints which I've listed as **SEEKING** that you would be willing to sell, please feel free to reach out to me via my contact form with offers.

  • How to install a Wazuh SIEM server on a Raspberry Pi 4B


With many security professionals now working remotely from home, some are looking at ways in which they can improve their home environment network and endpoint security. Commercial SIEM solutions are often considered too costly and complicated to deploy, but free lightweight open-source solutions offering minimal overhead, such as Wazuh, provide a good compromise for those of modest means aiming to mature the security of their home or small business environment.

This is my walkthrough on how to install the Wazuh server manager onto a Raspberry Pi 4B as an all-in-one deployment. I noticed there was no clear guidance online on how to go about doing this for a Raspberry Pi 4B, only a significant number of online posts outlining the installation and deployment difficulties people have faced. So here I've decided to document the process I took, tested and validated from start to finish, with clear directions anyone can follow.

By following this guide you can run your own open-source SIEM solution on a Raspberry Pi 4B at home. This is great not only for existing security professionals looking to improve the resiliency of their home setup, but also for those new to the information security industry seeking to gain hands-on experience in SIEM deployment, management, and other SIEM/SOC related activities.

Getting started

To get started, I used a Raspberry Pi 4B with 8GB RAM and a 128GB SanDisk Extreme SD card for storage. Whilst the Raspberry Pi 4B I used for this project was custom built from TurboPi with high-end hardware for this particular purpose, any Raspberry Pi 4B with adequate hardware capable of running Raspbian OS (buster or greater) leveraging the AArch64 64-bit extension of the ARM architecture should be sufficient.

Walkthrough

Install a Raspberry Pi 64-bit ARM OS

First download and install the official Raspberry Pi Imager. Now download the latest Raspi OS ARM64 .zip image from the official repo (make sure it's the latest version). Open the Raspberry Pi Imager application. Select the CHOOSE OS button and in the dropdown list select the Use custom option. Select the Raspi OS ARM64 .zip file you just downloaded. Select the SD storage card to write this to. Proceed with the prompted erasure, formatting, and writing activities to install the OS to your SD card.

The last step here is to write an empty text file named 'ssh' (no file extension) to the root directory of the SD card (a one-liner for this is shown below). When the device boots and sees the 'ssh' file present, it will automatically enable SSH, which will allow us to remotely access the device command line in a headless state.

Identify your Raspberry Pi local IP address

For this I use a Kali VM in VirtualBox, but any flavour of Linux distro can achieve the same. As the guest VM is not visible to the host under the default VirtualBox NAT settings, you need to change your VM network settings to bridge the network adapter between your host machine and the guest VM. Once the network adapter is bridged, we need to identify the Raspberry Pi IP address on the network. There are a few ways to do this (such as logging directly into the router), but we can also use an arp command with the -na flag to display the local network address translation tables and pipe the output using grep to pull any MAC addresses that begin with identifiers we're interested in:

arp -na | grep -i dc:a6:32

Raspberry Pi MAC addresses always use the same OUI (Organizational Unique Identifier) in their MAC address (b8:27:eb for all Raspberry Pi devices except the Raspberry Pi 4B, which uses dc:a6:32).
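As an aside to the SD-card step above, the empty 'ssh' file can be created from a Linux host while the card is still mounted. The mount path below is an assumption and will vary by system:

# Assumption: the SD card's boot partition is mounted at /media/$USER/boot
touch /media/$USER/boot/ssh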
Connecting to the Raspberry Pi

Now we have the device IP address (in this example mine is assigned 192.168.1.93), we can SSH into it using the default credentials (ssh pi@192.168.1.93 with the default password raspberry) and get started.

Change hostname and update

First you may wish to change the hostname from raspberry to wazuh (or something else). To do this run the command sudo raspi-config and navigate to System Options > Hostname using the GUI. Type in your desired hostname and hit Enter, then return to the main menu of the GUI and select Update. Once the device has finished updating, navigate to the Finish button to save your new raspi-config settings. For the hostname change to take effect, reboot the device using the command sudo reboot, then SSH back in using the same default credentials once the reboot is complete.

Enable login as root

If you want to login as root using SSH or WinSCP, you need to edit the SSHD config. Login, and edit the sshd_config file using:

sudo nano /etc/ssh/sshd_config

Find the line containing:

PermitRootLogin prohibit-password

Edit this to reflect:

PermitRootLogin yes

Close and save the file, then reboot or restart the sshd service using:

/etc/init.d/ssh restart

Set a root password if there isn't one already using sudo passwd root. Now you can login as root (I recommend using a strong password or SSH keys). Now proceed to sudo up using sudo su and continue the next steps as root.

Update the Raspberry Pi packages

To update the Raspberry Pi, first ensure it has an Internet connection, and then run the command:

apt update && apt upgrade -y

Once the update and upgrade process is complete, pull down the required packages for the next steps using:

apt-get install apt-transport-https zip unzip curl gnupg wget libcap2-bin software-properties-common lsb-release -y

Install Java 11

echo 'deb http://deb.debian.org/debian stretch-backports main' > /etc/apt/sources.list.d/backports.list
apt update -y
apt install openjdk-11-jdk -y

Install Elasticsearch OSS

Fetch the Elasticsearch OSS arm64 installation package:

wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-oss-7.10.2-arm64.deb

Install the Elasticsearch OSS package:

dpkg -i elasticsearch-oss-7.10.2-arm64.deb

Install Open Distro for Elasticsearch

Download and add the signing keys for the repositories:

wget -qO - https://d3g5vo6xdbdb9a.cloudfront.net/GPG-KEY-opendistroforelasticsearch | sudo apt-key add -
wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -

Add the repository definitions:

echo "deb https://d3g5vo6xdbdb9a.cloudfront.net/apt stable main" | sudo tee -a /etc/apt/sources.list.d/opendistroforelasticsearch.list

Update the packages:

apt-get update -y

Install the Open Distro for Elasticsearch package:

apt install opendistroforelasticsearch -y

Configure and run Elasticsearch

Run the following command to download the configuration file /etc/elasticsearch/elasticsearch.yml:

curl -so /etc/elasticsearch/elasticsearch.yml https://packages.wazuh.com/resources/4.2/open-distro/elasticsearch/7.x/elasticsearch_all_in_one.yml

Now we need to add users and roles in order to use the Wazuh Kibana plugin properly.
Run the following commands to add the Wazuh users and additional roles in Kibana:

curl -so /usr/share/elasticsearch/plugins/opendistro_security/securityconfig/roles.yml https://packages.wazuh.com/resources/4.2/open-distro/elasticsearch/roles/roles.yml
curl -so /usr/share/elasticsearch/plugins/opendistro_security/securityconfig/roles_mapping.yml https://packages.wazuh.com/resources/4.2/open-distro/elasticsearch/roles/roles_mapping.yml
curl -so /usr/share/elasticsearch/plugins/opendistro_security/securityconfig/internal_users.yml https://packages.wazuh.com/resources/4.2/open-distro/elasticsearch/roles/internal_users.yml

Remove the demo certificates:

rm /etc/elasticsearch/esnode-key.pem /etc/elasticsearch/esnode.pem /etc/elasticsearch/kirk-key.pem /etc/elasticsearch/kirk.pem /etc/elasticsearch/root-ca.pem -f

Download the wazuh-cert-tool.sh:

curl -so ~/wazuh-cert-tool.sh https://packages.wazuh.com/resources/4.2/open-distro/tools/certificate-utility/wazuh-cert-tool.sh
curl -so ~/instances.yml https://packages.wazuh.com/resources/4.2/open-distro/tools/certificate-utility/instances_aio.yml

Run the wazuh-cert-tool.sh to generate the certificates:

bash ~/wazuh-cert-tool.sh

Move the Elasticsearch certificates to their corresponding location for deployment:

mkdir /etc/elasticsearch/certs/
mv ~/certs/elasticsearch* /etc/elasticsearch/certs/
mv ~/certs/admin* /etc/elasticsearch/certs/
cp ~/certs/root-ca* /etc/elasticsearch/certs/

Enable and start the Elasticsearch service:

systemctl daemon-reload
systemctl enable elasticsearch
systemctl start elasticsearch

Run the Elasticsearch securityadmin script to load the new certificates information and start the cluster:

export ES_JAVA_HOME=/usr/share/elasticsearch/jdk/ && /usr/share/elasticsearch/plugins/opendistro_security/tools/securityadmin.sh -cd /usr/share/elasticsearch/plugins/opendistro_security/securityconfig/ -nhnv -cacert /etc/elasticsearch/certs/root-ca.pem -cert /etc/elasticsearch/certs/admin.pem -key /etc/elasticsearch/certs/admin-key.pem

Run the following command to ensure the installation is successful:

curl -XGET https://localhost:9200 -u admin:admin -k

A successful response returns a JSON object containing the node name, cluster name, and version details of the running Elasticsearch instance.

The Open Distro for Elasticsearch performance analyzer plugin is installed by default and can have a negative impact on system resources. The official Wazuh documentation recommends removing it with the following command, and restarting the Elasticsearch service afterwards:

/usr/share/elasticsearch/bin/elasticsearch-plugin remove opendistro-performance-analyzer
Install and run the Wazuh manager

Install the GPG key:

curl -s https://packages.wazuh.com/key/GPG-KEY-WAZUH | apt-key add -

Add the repository definition:

echo "deb https://packages.wazuh.com/4.x/apt/ stable main" | tee -a /etc/apt/sources.list.d/wazuh.list

Update the Wazuh packages:

apt-get update -y

Install the Wazuh manager package:

apt-get install wazuh-manager

Enable and start the Wazuh manager service:

systemctl daemon-reload
systemctl enable wazuh-manager
systemctl start wazuh-manager

Run the following command to check if the Wazuh manager is active:

systemctl status wazuh-manager

The service should be reported as active (running).

Install and configure Filebeat

Add the repository definition:

echo "deb https://artifacts.elastic.co/packages/7.x/apt stable main" | sudo tee -a /etc/apt/sources.list.d/elastic-7.x.list

Fetch the Filebeat arm64 installation package:

wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-oss-7.12.1-arm64.deb

Install the Filebeat package:

dpkg -i filebeat-oss-7.12.1-arm64.deb

Download the pre-configured Filebeat config file used to forward Wazuh alerts to Elasticsearch:

curl -so /etc/filebeat/filebeat.yml https://packages.wazuh.com/resources/4.2/open-distro/filebeat/7.x/filebeat_all_in_one.yml

Download the alerts template for Elasticsearch:

curl -so /etc/filebeat/wazuh-template.json https://raw.githubusercontent.com/wazuh/wazuh/4.2/extensions/elasticsearch/7.x/wazuh-template.json
chmod go+r /etc/filebeat/wazuh-template.json

Download the Wazuh module for Filebeat:

curl -s https://packages.wazuh.com/4.x/filebeat/wazuh-filebeat-0.1.tar.gz | tar -xvz -C /usr/share/filebeat/module

Copy the Elasticsearch certificates into /etc/filebeat/certs:

mkdir /etc/filebeat/certs
cp ~/certs/root-ca.pem /etc/filebeat/certs/
mv ~/certs/filebeat* /etc/filebeat/certs/

Enable and start the Filebeat service:

systemctl daemon-reload
systemctl enable filebeat
systemctl start filebeat

To ensure that Filebeat has been successfully installed, run the following command:

filebeat test output

Each connection check in the output should report ok.

Install and configure Kibana

Install the Kibana package:

apt-get install opendistroforelasticsearch-kibana

Download the Kibana configuration file:

curl -so /etc/kibana/kibana.yml https://packages.wazuh.com/resources/4.2/open-distro/kibana/7.x/kibana_all_in_one.yml

Create the /usr/share/kibana/data directory:

mkdir /usr/share/kibana/data
chown -R kibana:kibana /usr/share/kibana

Install the Wazuh Kibana plugin. The installation of the plugin must be done from the Kibana home directory as follows:

cd /usr/share/kibana
sudo -u kibana /usr/share/kibana/bin/kibana-plugin install https://packages.wazuh.com/4.x/ui/kibana/wazuh_kibana-4.2.5_7.10.2-1.zip

Copy the Elasticsearch certificates into the Kibana configuration folder:

mkdir /etc/kibana/certs
cp ~/certs/root-ca.pem /etc/kibana/certs/
mv ~/certs/kibana* /etc/kibana/certs/
chown kibana:kibana /etc/kibana/certs/*

Link Kibana's socket to privileged port 443:

setcap 'cap_net_bind_service=+ep' /usr/share/kibana/node/bin/node

Enable and start the Kibana service:

systemctl daemon-reload
systemctl enable kibana
systemctl start kibana

Check the Kibana service status to ensure it's running:

systemctl status kibana

The service should be reported as active (running).

Open the Kibana interface

Visit the Raspberry Pi 4B device IP address in your browser (e.g. my interface is reachable at https://192.168.1.93). Upon first access to Kibana, the browser shows a warning message stating that the certificate was not issued by a trusted authority.
An exception can be added in the advanced options of the web browser or, for increased security, the root-ca.pem file previously generated can be imported to the certificate manager of the browser. Alternatively, a certificate from a trusted authority can be configured.

Login to Kibana using the default user credentials admin with the password admin. For security purposes I recommend these credentials are changed.

Change the default passwords

To change the default credentials for all users residing in the internal_users.yml file, run the following command:

bash wazuh-passwords-tool.sh -a

The output lists the new credentials for each user. Remember to take note of these credentials or save them into a password manager if you have one.

Next we also need to update the credentials for Filebeat and Kibana (if these were not already covered by the wazuh-passwords-tool.sh script).

Open and update the Filebeat configuration file:

nano /etc/filebeat/filebeat.yml

Change the associated password: value. Make sure you make a record of this, then save and exit.

Open and update the Kibana configuration file:

nano /etc/kibana/kibana.yml

Change the associated elasticsearch.password: value. Make sure you make a record of this, then save and exit.

Restart all services for the changes to take effect:

systemctl restart wazuh-manager
systemctl restart filebeat
systemctl restart kibana

Congratulations, you've now installed the Wazuh server manager onto your Raspberry Pi. Now you can install the Wazuh agents on any devices you want to onboard to monitor security related events from within the server manager interface. The Wazuh agent installation guide is relatively simple and can be found here. I hope this tutorial helped.