Blog

  • I hacked the Dutch Tax Administration and got a trophy

    • General
    • by Jacob Riggs
    • 12-02-2022
    5.00 of 60 votes

The Dutch Tax Administration (Belastingdienst) sent me a trophy and a formal letter of appreciation on behalf of the Dutch government. The front engraving on the trophy playfully reads, “I hacked the Dutch Tax Administration and never got a refund”. The accompanying letter reads:

On behalf of the Dutch Tax and Customs Administration, we would like to thank Jacob for participating in our Coordinated Vulnerability Disclosure program. For that, we present this letter of appreciation to Jacob.

At the Dutch Tax and Customs Administration, we consider the security of our systems a top priority. But no matter how much effort we put into system security, there can still be vulnerabilities present. For that, we warmly welcome people like Jacob as a participant in the Coordinated Vulnerability Disclosure program.

We appreciate not only that you have reported a security issue to us, but also that you have done so professionally.

I would like to extend my thanks to the Belastingdienst Security Operations Center for sending me such a kind gift.

  • How to integrate Nuclei with Interactsh and Notify

    4.97 of 65 votes

This is my walkthrough for installing Nuclei from ProjectDiscovery, and how to integrate a suite of tools to assist with automation tasks. This includes subfinder for subdomain discovery, httpx for probing to validate live hosts, setting up your own self-hosted Interactsh server for OOB (out-of-band) testing, and how to install and configure Notify for the convenience of alerting on any identified vulnerabilities via external channels such as email, Slack, Discord, and Telegram. Throughout this guide I'll be using a standard Kali image, though you can of course use whichever flavour of Linux you prefer.

    Prerequisites

    Install Golang:
    sudo apt install golang

    Update Golang:
    git clone https://github.com/udhos/update-golang
    cd update-golang
    sudo ./update-golang.sh

    Set the golang environment module to auto:
    go env -w GO111MODULE=auto

    Install Go:
    wget https://go.dev/dl/go1.17.5.linux-amd64.tar.gz
    sudo tar -C /usr/local/ -xzf go1.17.5.linux-amd64.tar.gz

    Set the path variable within the bashrc file:
    echo 'export GOROOT=/usr/local/go' >> .bashrc
    echo 'export GOPATH=$HOME/go' >> .bashrc
    echo 'export PATH=$GOPATH/bin:$GOROOT/bin:$HOME/.local/bin:$PATH' >> .bashrc

    Refresh the bashrc file:
    source ~/.bashrc

    Install the following suite of tools (these are my baseline for optimal automation).

    Interactsh

    We need to set up a self-hosted Interactsh server to enable secure testing of OOB (out-of-band) vulnerabilities. To get started we need our own remote VPS. For this guide I'll be using a Digital Ocean droplet.

    Install the Interactsh client:
    go install -v github.com/projectdiscovery/interactsh/cmd/interactsh-client@latest

    First, create a Digital Ocean account if you don't have one already. Then in the top-right of your account dashboard, click to create a droplet. Choose a droplet image (for this example I've opted to use Debian). Select a droplet plan (for this example I've opted to go with basic). Select a CPU (for this example I've opted for a single CPU).
Select the region where your desired VPS will reside (for this example I've opted for London). Select an authentication protocol for droplet access (for this example I've selected a root password). Now we can create our droplet. Once the droplet is created, we should take note of the public IP address, as we'll need this later. Now we can click the three dots to access the droplet console, then proceed to access the console via our web browser and login as root, using the password we defined earlier (in step 7).

    Install Interactsh on the remote VPS droplet:
    sudo apt update
    sudo apt install golang
    go env -w GO111MODULE=auto
    wget https://go.dev/dl/go1.17.5.linux-amd64.tar.gz
    sudo tar -C /usr/local/ -xzf go1.17.5.linux-amd64.tar.gz
    echo 'export GOROOT=/usr/local/go' >> .bashrc
    echo 'export GOPATH=$HOME/go' >> .bashrc
    echo 'export PATH=$GOPATH/bin:$GOROOT/bin:$HOME/.local/bin:$PATH' >> .bashrc
    source ~/.bashrc

    Register and configure the desired domain

    Next we need to register our desired domain name with our preferred domain registrar. In this guide I'll be demonstrating the process using GoDaddy with the domain I selected for my OOB testing: oobtest.com

    First we need to login to GoDaddy, navigate to our account home page, and then scroll down to our domains and click Set up to configure the desired domain DNS settings. Now we click on Add and navigate to Host Names. Now we add our desired hostnames, which in this case will be directing NS1 and NS2 to the public IP address of our droplet. Now we can navigate back to our DNS Management page and scroll down to Nameservers. As these are configured to the default GoDaddy nameservers, we want to change them, entering our custom nameservers for our domain. Once the nameserver configuration is saved, we will need to wait for DNS propagation to complete in the background, which usually takes a few hours. Once this is done, our OOB testing server is ready to deploy. Afterwards, we can login to our droplet and start the server.
interactsh-server -domain oobtest.com

    At this stage the server is live and listening for any OOB interactions. To configure the server for secure communication with the client, we can leverage the -auth flag to generate an authentication token, which will remain valid for the duration of the session.

    interactsh-server -domain oobtest.com -auth

    Nuclei

    Install Nuclei:
    go install -v github.com/projectdiscovery/nuclei/v2/cmd/nuclei@latest

    Run nuclei for the first time to install the templates.

    Subfinder

    Install Subfinder:
    go install -v github.com/projectdiscovery/subfinder/v2/cmd/subfinder@latest

    Now we need to add API keys to the Subfinder configuration file. Run subfinder for the first time to generate the default config file. Now we can open and edit the configuration file:

    sudo nano ~/.config/subfinder/config.yaml

    Scroll to the bottom of the file and add your API keys for the corresponding third-party service providers. If you do not yet have API keys for the named service providers, you will need to register an account at each one and copy over the API key they issue you. Save and exit.

    httpx

    Install httpx:
    go install -v github.com/projectdiscovery/httpx/cmd/httpx@latest

    Notify

    In this example I'll be demonstrating how to set up Notify for Discord. These steps are similar for the alternative channels which are currently supported, such as Slack and Telegram.
Install Notify:
    go install -v github.com/projectdiscovery/notify/cmd/notify@latest

    Set up the Notify configuration file:
    sudo mkdir ~/.config/notify
    cd ~/.config/notify

    Create a Discord server. Choose a server name. Edit the channel within the server and change the channel name to notifications. Navigate to Integrations and create a webhook for the channel. Enter the desired bot name and copy the webhook URL to clipboard.

    Now open and edit the configuration file:
    sudo nano provider-config.yaml

    Enter the following:

    discord:
      - id: "Discord Server for Nuclei Alerts"
        discord_channel: "notifications"
        discord_username: "Notify Bot"
        discord_format: "{{data}}"
        discord_webhook_url: "[PASTE WEBHOOK URL HERE]"

    Save and exit.

    Ready to go

    Now we can begin using the suite of tools together. First we want to initialise our Interactsh client:

    interactsh-client -v -server https://oobtest.com -token [INTERACTSH SERVER TOKEN]

    The client communicates with the server to generate a unique subdomain ‘payload’ for OOB testing. This will listen for OOB interactions, and with the -v flag present will provide verbose logging within the terminal window for any requests it receives. With the Interactsh client generated subdomain, and the Interactsh server generated auth token, we can feed these values into our Nuclei command to include OOB testing in our scans. For example:

    subfinder -d jacobriggs.io -o subdomains.txt | httpx -list subdomains.txt | nuclei -stats -l subdomains.txt -t takeovers/ -t cves/ -iserver "https://c6u975tmtn7kpis0l360cg6j8fayyyyyn.oobtest.com" -itoken "ac3dc1c75be69feb5a6a2d4bf126cff3b1a4fb4f8cf2f28fb8a827745302ceaf" -o vulns.txt; notify -data vulns.txt -bulk

    I will briefly break down the arguments in this command below: subfinder will enumerate the subdomains of the designated target domain and write all the subdomains it finds to a subdomains.txt file. httpx will then probe the subdomains to validate which ones are alive.
Nuclei will then ingest the alive subdomains, run each subdomain URL through the defined template scans (in this case to check for subdomain takeovers and any vulnerabilities corresponding to known CVEs), and then output any logged findings to a vulns.txt file. With the self-hosted Interactsh instance specified, Nuclei will auto-correlate the requests it makes with any OOB interactions it identifies on the listening Interactsh server. Once the Nuclei scanning is complete, Notify will then parse the vulns.txt file and send a bulk alert over the webhook to the corresponding notification channel (in this case alerts will arrive in the Discord channel we set up earlier).
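The full chain above lends itself to a small wrapper script for repeat scans. Below is a minimal sketch under the assumption that the four tools are already on your PATH; the ISERVER and ITOKEN values are placeholders you must replace with your own Interactsh server URL and session token:

```shell
#!/usr/bin/env bash
# Hypothetical wrapper around the subfinder -> httpx -> nuclei -> notify
# chain described above. ISERVER/ITOKEN below are placeholders.

ISERVER="https://oobtest.com"
ITOKEN="REPLACE_WITH_INTERACTSH_SESSION_TOKEN"

# Refuse to run if any tool from this guide is missing
have_tools() {
  for tool in subfinder httpx nuclei notify; do
    command -v "$tool" >/dev/null 2>&1 || { echo "missing: $tool" >&2; return 1; }
  done
}

scan() {
  target="$1"
  subfinder -silent -d "$target" -o subdomains.txt
  httpx -silent -list subdomains.txt -o alive.txt
  nuclei -stats -l alive.txt -t takeovers/ -t cves/ \
    -iserver "$ISERVER" -itoken "$ITOKEN" -o vulns.txt
  notify -data vulns.txt -bulk
}

# Example: ./scan.sh example.com (only runs when every tool is installed)
if [ -n "${1:-}" ] && have_tools; then
  scan "$1"
fi
```

One small deviation from the one-liner: the script feeds the httpx-validated hosts (alive.txt) into nuclei rather than reusing subdomains.txt, so dead hosts are not scanned.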

  • Source code disclosure via exposed .git

    5.00 of 67 votes

This is my write-up on a misconfigured .git repo I found during my day off, and how the potential exploitation of this vulnerability can amount to source code disclosure. During my day off I took a brief look at a particular vendor my employer was in the process of procuring a new service from. I quickly identified what appeared to be an exposed .git repo, which I was able to provisionally validate over HTTP. Whilst I wasn't able to view the .git folder itself because public read access is disabled on the server, I was able to confirm the repository contents were accessible.

    Example #1: https://[TARGET]/.git/config
    Example #2: https://[TARGET]/.git/logs/HEAD

    Here I will walk through how we can extract the contents of a repository like this to help identify the impact of a vulnerability such as source code disclosure with a clear PoC. To dump this repository locally for analysis, and to help quantify the number of objects within it, we can use the dumper tool from GitTools:

    git clone https://github.com/internetwache/GitTools.git
    cd GitTools/Dumper
    ./gitdumper.sh https://[TARGET]/.git/ ~/target

    To view a summary of the file tree:

    cd target
    tree -a .git/objects

    The files within the repository are identified by their corresponding hash values, though the full hash string for these actually includes the first two characters of each corresponding subfolder within the tree. This means we need to add these to the file name to complete the 40-character hash string. We can pull these 40-character hashes by concatenating the subfolder name with the filename, using find and then piping the results into the format we want with awk:

    find .git/objects -type f | awk -F/ '{print $3$4}'

    We can use a for loop to identify the type of every file within the objects directory:

    for i in $(find .git/objects -type f | awk -F/ '{print $3$4}'); do git cat-file -t $i; done

    Here we can see these objects consist of a number of trees, commits, and blobs (binary large objects).
In this example, I actually have a calculated total of:

    1,276 trees
    910 commits
    923 blobs

    By default, git stores objects in .git/objects as their original contents, compressed using the zlib library. This means we cannot view objects within a text editor as-is, and must instead rely on alternative options such as git cat-file.

    We can check the type of each of these files individually using their identified hashes:

    git cat-file -t [FULL FILE HASH]

    We can preview the contents of each of these files individually using their identified hashes:

    git cat-file -p [FULL FILE HASH] | head

    In my last example, we can see this particular blob contains PHP code. As PHP is a server-side scripting language (almost like a blueprint to backend functionality), this can be used to evidence that server-side source code is exposed within this repository. The impact of this can vary depending on the purpose and volume of the code, but in my experience often results in the exposure of backend configuration settings, hardcoded credentials (such as usernames and passwords), API tokens, and other endpoints. If this is a repository upon which any proprietary software components rely, then source code disclosure of this type can also present a number of issues relating to theft of intellectual property. This finding was duly reported to the affected vendor within 24 hours of being identified.
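The manual steps above (concatenate subfolder + filename, then cat-file each hash) can be rolled into a single tally. A hedged sketch: it builds a throwaway repo purely so the pipeline has loose objects to count; against a real dump you would run the same find/cat-file pipeline from inside the dumped target directory.

```shell
#!/usr/bin/env bash
# Tally git object types the same way as the manual walkthrough above.

if command -v git >/dev/null 2>&1; then
  # Throwaway repo so there are loose objects to count
  repo=$(mktemp -d)
  git -C "$repo" init -q
  echo 'hello' > "$repo/file.txt"
  git -C "$repo" add file.txt
  git -C "$repo" -c user.email=demo@example.com -c user.name=demo commit -qm init

  cd "$repo"
  # Rebuild each 40-char hash from subfolder + filename, ask git for the
  # object type, then count how many trees/commits/blobs there are
  find .git/objects -type f | awk -F/ '{print $3$4}' \
    | while read -r h; do git cat-file -t "$h"; done \
    | sort | uniq -c
fi
```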

  • My limited edition print art collection

    4.94 of 84 votes

Over the years I've collected a number of prints from a particular artist: David Ambarzumjan, an incredibly talented painter from Germany who aims to combine abstract and surrealistic elements to express his fascination for nature in all its diversity and unpredictability. His works can be found in private collections and exhibitions all around the world, some of which I've been lucky enough to acquire for decorative placements throughout my home. Here I've documented some of the limited edition prints I've acquired from his Brushstrokes in Time collection, and any I'm still interested in buying. I'll endeavour to update this blog post whenever I'm successful in my efforts.

    THIS WAS WATER (1/100)
    ZEBRA CROSSING (15/50)
    SHARKS IN MONTMARTRE (6/50)
    WATERSHED (6/100)
    STRAY (8/50)
    HUMAN NATURE (OPEN EDITION)
    RECOVER (OPEN EDITION)
    BREATHE (**SEEKING**)
    GEZEITENWELLE (97/100)

    If you have any of these limited edition prints which I've listed as **SEEKING** that you would be willing to sell, please feel free to reach out to me via my contact form with offers.

  • How to install a Wazuh SIEM server on a Raspberry Pi 4B

    4.83 of 87 votes

With many security professionals now working remotely from home, some are looking at ways in which they can improve their home network and endpoint security. Commercial SIEM solutions are often considered too costly and complicated to deploy, but free lightweight open-source solutions offering minimal overhead, such as Wazuh, provide a good compromise for those of modest means aiming to mature the security of their home or small business environment.

    This is my walkthrough on how to install the Wazuh server manager onto a Raspberry Pi 4B as an all-in-one deployment. I noticed there was no clear guidance online on how to go about doing this for a Raspberry Pi 4B, only a significant number of online posts outlining the installation and deployment difficulties people have faced. So here I've decided to document the process I took, tested and validated from start to finish, with clear directions anyone can follow.

    By following this guide you can run your own open-source SIEM solution on a Raspberry Pi 4B at home. This is great not only for existing security professionals looking to improve the resiliency of their home setup, but also for those new to the information security industry seeking to gain hands-on experience in SIEM deployment, management, and other SIEM/SOC related activities.

    Getting started

    To get started, I used a Raspberry Pi 4B with 8GB RAM and a 128GB SanDisk Extreme SD card for storage. Whilst the Raspberry Pi 4B I used for this project was custom built from TurboPi with high-end hardware for this particular purpose, any Raspberry Pi 4B with adequate hardware capable of running Raspbian OS (Buster or greater) leveraging the AArch64 64-bit extension of the ARM architecture should be sufficient.

    Walkthrough

    Install a Raspberry Pi 64-bit ARM OS

    First download and install the official Raspberry Pi Imager. Now download the latest Raspi OS ARM64 .zip image from the official repo (make sure it's the latest version).
Open the Raspberry Pi Imager application. Select the CHOOSE OS button and in the dropdown list select the Use custom option. Select the Raspi OS ARM64 .zip file you just downloaded. Select the SD storage card to write this to. Proceed with the prompted erasure, formatting, and writing activities to install the OS to your SD card. The last step here is to write an empty text file named 'ssh' (no file extension) to the root directory of the SD card. When the device boots and sees the 'ssh' file present, it will automatically enable SSH, which will allow us to remotely access the device command line in a headless state.

    Identify your Raspberry Pi local IP address

    For this I use a Kali VM in VirtualBox, but any flavour of Linux distro can achieve the same. As the guest VM is not visible to the host under the default VirtualBox NAT settings, you need to change your VM network settings to bridge the network adapter between your host machine and the guest VM. Once the network adapter is bridged, we need to identify the Raspberry Pi IP address on the network. There are a few ways to do this (such as logging directly into the router), but we can also use an arp command with the -na flag to display the local network address translation tables, and pipe the output through grep to pull any MAC addresses that begin with identifiers we're interested in:

    arp -na | grep -i dc:a6:32

    Raspberry Pi devices always use the same OUI (Organizationally Unique Identifier) in their MAC address (b8:27:eb for all Raspberry Pi devices except the Raspberry Pi 4B, which uses dc:a6:32).

    Connecting to the Raspberry Pi

    Now we have the device IP address (in this example mine is assigned 192.168.1.93), we can SSH into it using the default credentials (ssh pi@192.168.1.93 with the default password raspberry) and get started.

    Change hostname and update

    First you may wish to change the hostname from raspberry to wazuh (or something else).
To do this run the command sudo raspi-config and navigate to System Options > Hostname. Type in your desired hostname and hit Enter, then return to the main menu and select Update. Once the device has finished updating, navigate to the Finish button to save your new raspi-config settings. For the hostname change to take effect, reboot the device using the command sudo reboot, then SSH back in using the same credentials once the reboot is complete.

    Enable login as root

    If you want to login as root using SSH or WinSCP, you need to edit the SSHD config. Login, and edit the sshd_config file using:

    sudo nano /etc/ssh/sshd_config

    Find the line containing PermitRootLogin prohibit-password and edit it to reflect PermitRootLogin yes. Close and save the file, then reboot or restart the sshd service using:

    /etc/init.d/ssh restart

    Set a root password if there isn't one already using sudo passwd root. Now you can login as root (I recommend using a strong password or SSH keys). Now proceed to sudo up using sudo su and continue the next steps as root.
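The PermitRootLogin edit above can also be scripted rather than done by hand in nano. A hedged sketch using GNU sed, demonstrated here against a scratch copy of the file so nothing real is touched (point the same sed at /etc/ssh/sshd_config only once you have a backup):

```shell
#!/usr/bin/env bash
# Flip PermitRootLogin to "yes", demonstrated on a scratch copy of sshd_config.

tmp=$(mktemp)
# Stock Raspberry Pi OS ships the directive commented out like this:
printf '%s\n' 'Port 22' '#PermitRootLogin prohibit-password' > "$tmp"

# Match the line whether or not it is commented out, and rewrite it
sed -i 's/^#\?PermitRootLogin .*/PermitRootLogin yes/' "$tmp"

grep '^PermitRootLogin' "$tmp"   # confirm the change took effect
```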
Update the Raspberry Pi packages

    To update the Raspberry Pi, first ensure the device has an Internet connection, then run:

    apt update && apt upgrade -y

    Once the update and upgrade process is complete, pull down the required packages for the next steps using:

    apt-get install apt-transport-https zip unzip curl gnupg wget libcap2-bin software-properties-common lsb-release -y

    Install Java 11

    echo 'deb http://deb.debian.org/debian stretch-backports main' > /etc/apt/sources.list.d/backports.list
    apt update -y
    apt install openjdk-11-jdk -y

    Install Elasticsearch OSS

    Fetch the Elasticsearch OSS arm64 installation package:
    wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-oss-7.10.2-arm64.deb

    Install the Elasticsearch OSS package:
    dpkg -i elasticsearch-oss-7.10.2-arm64.deb

    Install Open Distro for Elasticsearch

    Download and add the signing keys for the repositories:
    wget -qO - https://d3g5vo6xdbdb9a.cloudfront.net/GPG-KEY-opendistroforelasticsearch | sudo apt-key add -
    wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -

    Add the repository definitions:
    echo "deb https://d3g5vo6xdbdb9a.cloudfront.net/apt stable main" | sudo tee -a /etc/apt/sources.list.d/opendistroforelasticsearch.list

    Update the packages:
    apt-get update -y

    Install the Open Distro for Elasticsearch package:
    apt install opendistroforelasticsearch -y

    Configure and run Elasticsearch

    Run the following command to download the configuration file /etc/elasticsearch/elasticsearch.yml:
    curl -so /etc/elasticsearch/elasticsearch.yml https://packages.wazuh.com/resources/4.2/open-distro/elasticsearch/7.x/elasticsearch_all_in_one.yml

    Now we need to add users and roles in order to use Wazuh Kibana properly.
Run the following commands to add the Wazuh users and additional roles in Kibana:

    curl -so /usr/share/elasticsearch/plugins/opendistro_security/securityconfig/roles.yml https://packages.wazuh.com/resources/4.2/open-distro/elasticsearch/roles/roles.yml
    curl -so /usr/share/elasticsearch/plugins/opendistro_security/securityconfig/roles_mapping.yml https://packages.wazuh.com/resources/4.2/open-distro/elasticsearch/roles/roles_mapping.yml
    curl -so /usr/share/elasticsearch/plugins/opendistro_security/securityconfig/internal_users.yml https://packages.wazuh.com/resources/4.2/open-distro/elasticsearch/roles/internal_users.yml

    Remove the demo certificates:

    rm /etc/elasticsearch/esnode-key.pem /etc/elasticsearch/esnode.pem /etc/elasticsearch/kirk-key.pem /etc/elasticsearch/kirk.pem /etc/elasticsearch/root-ca.pem -f

    Download wazuh-cert-tool.sh and its instances file:

    curl -so ~/wazuh-cert-tool.sh https://packages.wazuh.com/resources/4.2/open-distro/tools/certificate-utility/wazuh-cert-tool.sh
    curl -so ~/instances.yml https://packages.wazuh.com/resources/4.2/open-distro/tools/certificate-utility/instances_aio.yml

    Run wazuh-cert-tool.sh to generate the certificates:

    bash ~/wazuh-cert-tool.sh

    Move the Elasticsearch certificates to their corresponding location for deployment:

    mkdir /etc/elasticsearch/certs/
    mv ~/certs/elasticsearch* /etc/elasticsearch/certs/
    mv ~/certs/admin* /etc/elasticsearch/certs/
    cp ~/certs/root-ca* /etc/elasticsearch/certs/

    Enable and start the Elasticsearch service:

    systemctl daemon-reload
    systemctl enable elasticsearch
    systemctl start elasticsearch

    Run the Elasticsearch securityadmin script to load the new certificates information and start the cluster:

    export ES_JAVA_HOME=/usr/share/elasticsearch/jdk/ && /usr/share/elasticsearch/plugins/opendistro_security/tools/securityadmin.sh -cd /usr/share/elasticsearch/plugins/opendistro_security/securityconfig/ -nhnv -cacert /etc/elasticsearch/certs/root-ca.pem -cert /etc/elasticsearch/certs/admin.pem -key /etc/elasticsearch/certs/admin-key.pem
Run the following command to ensure the installation is successful:

    curl -XGET https://localhost:9200 -u admin:admin -k

    The Open Distro for Elasticsearch performance analyzer plugin is installed by default and can have a negative impact on system resources. The official Wazuh documentation recommends removing it with the following command and restarting the Elasticsearch service afterwards:

    /usr/share/elasticsearch/bin/elasticsearch-plugin remove opendistro-performance-analyzer

    Install and run the Wazuh manager

    Install the GPG key:
    curl -s https://packages.wazuh.com/key/GPG-KEY-WAZUH | apt-key add -

    Add the repository definition:
    echo "deb https://packages.wazuh.com/4.x/apt/ stable main" | tee -a /etc/apt/sources.list.d/wazuh.list

    Update the Wazuh packages:
    apt-get update -y

    Install the Wazuh manager package:
    apt-get install wazuh-manager

    Enable and start the Wazuh manager service:
    systemctl daemon-reload
    systemctl enable wazuh-manager
    systemctl start wazuh-manager

    Run the following command to check if the Wazuh manager is active:
    systemctl status wazuh-manager

    Install and configure Filebeat

    Add the repository definition:
    echo "deb https://artifacts.elastic.co/packages/7.x/apt stable main" | sudo tee -a /etc/apt/sources.list.d/elastic-7.x.list

    Fetch the Filebeat arm64 installation package:
    wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-oss-7.12.1-arm64.deb

    Install the Filebeat package:
    dpkg -i filebeat-oss-7.12.1-arm64.deb

    Download the pre-configured Filebeat config file used to forward Wazuh alerts to Elasticsearch:
    curl -so /etc/filebeat/filebeat.yml https://packages.wazuh.com/resources/4.2/open-distro/filebeat/7.x/filebeat_all_in_one.yml

    Download the alerts template for Elasticsearch:
    curl -so /etc/filebeat/wazuh-template.json https://raw.githubusercontent.com/wazuh/wazuh/4.2/extensions/elasticsearch/7.x/wazuh-template.json
    chmod go+r /etc/filebeat/wazuh-template.json
Download the Wazuh module for Filebeat:
    curl -s https://packages.wazuh.com/4.x/filebeat/wazuh-filebeat-0.1.tar.gz | tar -xvz -C /usr/share/filebeat/module

    Copy the Elasticsearch certificates into /etc/filebeat/certs:
    mkdir /etc/filebeat/certs
    cp ~/certs/root-ca.pem /etc/filebeat/certs/
    mv ~/certs/filebeat* /etc/filebeat/certs/

    Enable and start the Filebeat service:
    systemctl daemon-reload
    systemctl enable filebeat
    systemctl start filebeat

    To ensure that Filebeat has been successfully installed, run the following command:
    filebeat test output

    Install and configure Kibana

    Install the Kibana package:
    apt-get install opendistroforelasticsearch-kibana

    Download the Kibana configuration file:
    curl -so /etc/kibana/kibana.yml https://packages.wazuh.com/resources/4.2/open-distro/kibana/7.x/kibana_all_in_one.yml

    Create the /usr/share/kibana/data directory:
    mkdir /usr/share/kibana/data
    chown -R kibana:kibana /usr/share/kibana

    Install the Wazuh Kibana plugin. The installation of the plugin must be done from the Kibana home directory as follows:
    cd /usr/share/kibana
    sudo -u kibana /usr/share/kibana/bin/kibana-plugin install https://packages.wazuh.com/4.x/ui/kibana/wazuh_kibana-4.2.5_7.10.2-1.zip

    Copy the Elasticsearch certificates into the Kibana configuration folder:
    mkdir /etc/kibana/certs
    cp ~/certs/root-ca.pem /etc/kibana/certs/
    mv ~/certs/kibana* /etc/kibana/certs/
    chown kibana:kibana /etc/kibana/certs/*

    Link Kibana's socket to privileged port 443:
    setcap 'cap_net_bind_service=+ep' /usr/share/kibana/node/bin/node

    Enable and start the Kibana service:
    systemctl daemon-reload
    systemctl enable kibana
    systemctl start kibana

    Check the Kibana service status to ensure it's running:
    systemctl status kibana

    Open the Kibana interface

    Visit the Raspberry Pi 4B device IP address in your browser (e.g. my interface is reachable at https://192.168.1.93).
Upon first access to Kibana, the browser shows a warning message stating that the certificate was not issued by a trusted authority. An exception can be added in the advanced options of the web browser or, for increased security, the root-ca.pem file previously generated can be imported into the certificate manager of the browser. Alternatively, a certificate from a trusted authority can be configured.

    Login to Kibana using the default user credentials admin with the password admin. For security purposes I recommend these credentials are changed.

    Change the default passwords

    To change the default credentials for all users residing in the internal_users.yml file, run the following command:

    bash wazuh-passwords-tool.sh -a

    Remember to take note of these credentials, or save them into a password manager if you have one. Next we also need to update the credentials for Filebeat and Kibana (if these were not already covered by the wazuh-passwords-tool.sh script).

    Open and update the Filebeat configuration file:

    nano /etc/filebeat/filebeat.yml

    Change the associated password: value. Make sure you make a record of this, then save and exit.

    Open and update the Kibana configuration file:

    nano /etc/kibana/kibana.yml

    Change the associated elasticsearch.password: value. Make sure you make a record of this, then save and exit.

    Restart all services for the changes to take effect:

    systemctl restart wazuh-manager
    systemctl restart filebeat
    systemctl restart kibana

    Congratulations, you've now installed the Wazuh server manager onto your Raspberry Pi. Now you can install the Wazuh agents on any devices you want to onboard, and monitor security related events from within the server manager interface. The Wazuh agent installation guide is relatively simple and can be found here. I hope this tutorial helped.
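The individual status checks above can be bundled into one quick health sweep after the final restarts. A minimal sketch, assuming the unit names used throughout this guide and the default Elasticsearch port; it only reports state and changes nothing:

```shell
#!/usr/bin/env bash
# Quick post-install health sweep for the all-in-one Wazuh stack.

check_service() {
  # systemctl is-active prints "active" for a running unit
  state=$(systemctl is-active "$1" 2>/dev/null)
  [ "$state" = "active" ] && echo "OK   $1" || echo "FAIL $1 (${state:-no systemctl})"
}

check_es_api() {
  # -k is accepted here because this guide uses self-signed certificates;
  # swap admin:admin for the credentials you set with wazuh-passwords-tool.sh
  if curl -sk --max-time 5 -u admin:admin https://localhost:9200 | grep -q cluster_name; then
    echo "OK   elasticsearch API"
  else
    echo "FAIL elasticsearch API"
  fi
}

for svc in elasticsearch wazuh-manager filebeat kibana; do
  check_service "$svc"
done
check_es_api
```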

  • I hacked the Ministry of Defence so they sent me this coin

    • General
    • by Jacob Riggs
    • 28-09-2021
    4.90 of 82 votes

The UK Ministry of Defence (MoD) sent me a VDP challenge coin for my finding and responsible disclosure of a critical (9.6 CVSS) severity vulnerability. Together with the coin was a small thank you note:

Thank you! The Ministry of Defence takes the security of our systems seriously. To show our appreciation for all your time and effort, we would like to reward you with a Vulnerability Disclosure hacker coin.

This echoes a similar response to when I hacked the Dutch government and they sent me a t-shirt.