A tool for analysis and visualization of DDoS attacks from PCAP files
This tool consists of three parts:
- The `miner` subproject is a packet decoder and feature extractor that produces output as JSON files and communicates over stdout or an IPC channel if available.
- The `api` is a RESTful API based on Express.js which orchestrates the `miner` package if required.
- The `frontend` is a Vue.js-based SPA that renders visualizations obtained from the `api`.
There are two ways to use this project:
- Running the `miner` through the shell as described under `Development > miner`.
- Running the `api` (locally or on a server) and serving the `frontend` through a webserver.
Clone the project from GitHub:

```shell
git clone git@github.com:ddosgrid/ddos-visualization.git
```
Enter the `miner` subproject and install the necessary dependencies. Make sure you are running Node.js version 10 and that you have libpcap installed.
```shell
cd miner
npm i
```
After that, the `miner` package can be imported as an NPM module or run manually through the shell. Alternatively, one can use the miner as a subprocess, where it will communicate over an IPC channel. For example, to run it through a shell:
```shell
node index.js pcap_path=/path/to/your/pcap-file
```
This will run the miner which will render its result to stdout:
```shell
node index.js pcap_path=/path/to/your/capture.pcap
✓ Input check completed
✓ Analysis started
✓ Setup of the following miners has completed:
    - Miscellaneous Metrics
    - Top 20 UDP/TCP ports by number of segments
    - Number of segments received over all TCP/UDP ports
    - Connection states of TCP segments
    - Analysis of IPv4 vs IPv6 traffic (based on packets)
    - Top 5 source hosts (IPv4)
    - Top 100 source hosts (IPv4)
✓ Decoding has finished, starting post-parsing analysis
✓ All miners have finished.
```
Run it as a subprocess:
```javascript
const child_process = require('child_process')
const fork = child_process.fork
const path = require('path')

// Options to run the miner as a subprocess
var program = path.resolve('../miner/index.js')
var args = [ `pcap_path=${pcapPath}` ]
var options = { stdio: [ 'ipc' ] }

var childProcess = fork(program, args, options)

// Once the miner finishes, it will send a 'message' with file paths
// pointing to the analysis results
childProcess.on('message', function (minerResults) {
  var parsedResults = JSON.parse(minerResults)
  // Do something with the JSON files
})
childProcess.on('exit', (code) => {
  if (code !== 0) {
    // Something went wrong
  }
})
```
Setting up the `api` is straightforward: simply fetch the dependencies and start the main JavaScript file. Make sure that you have previously installed the dependencies of the `miner`!

```shell
cd miner; npm i; cd ..;
cd api; npm i
```
Now simply run it and optionally pass the port where it should listen:

```shell
node index.js
```

or

```shell
export PORT=1234; node index.js
```
Enter the `frontend` subproject and run it after fetching its dependencies:

```shell
npm i; npm run serve
```
This will automatically rebuild the project if a file changes.
To use the application you will need to let it connect to an `api` instance.
In development mode (`npm run serve`) it will always connect to `localhost:3000`.
You can run the `api` locally as described in the previous section, or, if you don't plan to work on the backend part, you can just run the latest image from Docker Hub with one command:

```shell
docker run -it -p 3000:3000 ddosgrid/ddosgrid-api
```
Our frontend is continuously integrated and deployed by a GitHub Action to a GitHub Pages branch.
If you are building manually, simply run `npm run build` and then deploy the `dist` folder.
This will create a frontend that automatically connects to our hostname in production. If you want to change the hostname of the API, please edit `frontend/.env.production`.
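For reference, a hypothetical `frontend/.env.production` could look like the fragment below. The variable name `VUE_APP_APIBASEURL` is an assumption (Vue CLI only exposes variables prefixed with `VUE_APP_`), so check the file for the actual key:

```shell
# frontend/.env.production (hypothetical content; the real key may differ)
VUE_APP_APIBASEURL=https://api.ddosgrid.online
```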
Our `api` is continuously integrated, built as a Docker image and pushed to Docker Hub. From there you can run it with one command:

```shell
cd api; docker-compose up
```
From there we simply pull the image from Docker Hub at a five-minute interval and then redeploy the service using cron:

```shell
*/5 * * * * docker pull ddosgrid/ddosgrid-api:latest; docker service update ddosgridapi_ddosgridapi --image ddosgrid/ddosgrid-api:latest
```
This Docker Compose file will run the API on the local interface and also expose it as a Tor service. You can then connect to that onion service or place a reverse proxy in front of the local server. With NGINX, this would look as follows:
```nginx
server {
    listen 443 ssl;
    server_name api.ddosgrid.online;
    # Configure max upload size
    client_max_body_size 1G;
    # Configure SSL
    ssl_certificate /path/to/your/ssl/cert;
    ssl_certificate_key /path/to/your/ssl/key;
    # Proxy to the locally running server
    location / {
        proxy_pass http://localhost:3000;
    }
}
```
Since we don't want our server to be reachable directly without going through the proxy, we recommend blocking external access; otherwise one could, for example, bypass the file size limit. The following would drop all packets destined to our server that are sent from outside our host, while traffic reaching the server from our local NGINX instance is still accepted:
```shell
local_server_port=3000
inbound_wan_interface=eth0
iptables -I INPUT -i $inbound_wan_interface --protocol tcp --destination-port $local_server_port -j DROP
```

Note that locally delivered packets traverse the `INPUT` chain (the `FORWARD` chain only sees routed traffic), which is why the rule is inserted there.
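If you prefer an explicit allow rule over relying on the chain's default policy, a sketch (assuming NGINX reaches the backend over loopback, as `proxy_pass http://localhost:3000` above implies):

```shell
# Explicitly accept proxy traffic arriving over the loopback interface,
# i.e. NGINX connecting to http://localhost:3000
iptables -I INPUT -i lo --protocol tcp --destination-port $local_server_port -j ACCEPT
```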