
probr-core's Introduction

Probr - A generic WiFi tracking system

With the vast number of wireless-enabled devices in use, the data carried by their packets offers great opportunities for research and everyday applications. By passively listening to wireless signals, it is possible to gather information that can be used to answer interesting questions. This project aims to develop a system that helps researchers, developers and engineers collect such data for analysis tasks.

It is a master's student research project at the Communication Systems Group of the Department of Informatics at the University of Zurich, Switzerland.

probr-core

The probr-core project provides the core functionality for setting up devices for WiFi sniffing, managing them remotely, and collecting WiFi probe request packets.

Technology

The frameworks, languages, tools and technologies used and required in the probr-core project are:

Devices

The devices used for sniffing WiFi packets must fulfill the following requirements:

  • *NIX operating system or similar (Debian, Ubuntu, OpenWRT, Mac OS X, Raspbian etc.)
  • wget installed
  • tcpdump installed
  • internet access
  • wireless interface with monitor mode capabilities
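
Whether a box meets these requirements can be checked with a few lines of Python; a minimal sketch using only the standard library (the command list mirrors the bullets above; extend it as needed):

```python
import shutil

def missing_prerequisites(commands=("wget", "tcpdump")):
    """Return the required commands that are not found on PATH."""
    return [cmd for cmd in commands if shutil.which(cmd) is None]

missing = missing_prerequisites()
if missing:
    print("missing:", ", ".join(missing))
else:
    print("all prerequisites found")
```

Checking monitor-mode capability still requires an interface-specific tool (e.g. `iw list`), which this sketch does not attempt.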

Installation

We highly recommend using Virtualenv to manage the Python environment for probr-core.

After cloning the project, create a virtual environment for probr outside the probr-core directory:

virtualenv .env_probr

Activate the virtual python environment:

source .env_probr/bin/activate

Change into the probr-core directory, then install the project's Python dependencies:

pip install -r requirements.txt

Now, install the frontend and web dependencies using bower:

bower install

You're pretty much set to start probr-core at this point. What is left to do:

Create the DB tables:

python manage.py migrate

Create an admin user for the Django web project:

python manage.py createsuperuser

Make sure the MongoDB daemon is running:

mongod &

Also, the redis-server must be running before you can use probr-core:

redis-server &
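
Both checks can be automated before start-up; a hedged sketch that probes the default MongoDB (27017) and Redis (6379) ports with a plain TCP connection (the ports are the stock defaults and may differ in your setup):

```python
import socket

def port_open(host, port, timeout=1.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for name, port in (("mongod", 27017), ("redis-server", 6379)):
    state = "up" if port_open("localhost", port) else "DOWN"
    print("%s (port %d): %s" % (name, port, state))
```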

Start-Up

Finally, you're ready to start your probr-core server by running:

python manage.py runserver

In order for the data to be processed and entered into the database, you need to start the celery worker:

celery worker -A probr

And you can check it out by visiting http://localhost:8000.

probr-core's People

Contributors

dcale, dschoeni, gmazlami, joe4dev, sebastian-stephan


probr-core's Issues

Signal strength is always -256

Awesome project! I'm having a little trouble getting the signal strength (either from tcpdump or from the parsing). As a minimal test, I'm using

tcpdump -e -ni (myinterface) -s 0 -w capture-%s.pcap -G 5 -tttt -vvvv type mgt subtype probe-req

and then I parse it using a scaled down version of your code: parse_packet.py which results in something like the following:

{'inserted_at': datetime.datetime(2017, 11, 25, 10, 8, 46, 667550), 'mac_address_src': b'ac3743a88dc7', 'time': datetime.datetime(2017, 11, 25, 9, 59, 32, 248626), 'mac_address_dst': b'ffffffffffff', 'signal_strength': -256, 'ssid': b''}

where the signal_strength is always -256. However, I have some other code for doing this in scapy: packets.py which will give the following correct RSSI for that same MAC:

('ac:37:43:a8:8d:c7', 'a0:63:91:2b:9e:65', '-75')
('ac:37:43:a8:8d:c7', 'a0:63:91:2b:9e:65', '-72')
('ac:37:43:a8:8d:c7', 'a0:63:91:2b:9e:65', '-82')
('ac:37:43:a8:8d:c7', 'a0:63:91:2b:9e:65', '-82')

so I know that it's not my interface. Is it a problem with tcpdump? I also tried adding the -y IEEE802_11_RADIO flag, but that had the same problem. Could it be a problem with the parsing? Please let me know, thanks!

(Also, full disclosure, I built something very similar a while ago: https://github.com/schollz/find-lf, but I think your project is above and beyond what I did, great job!)
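
One plausible (unconfirmed) source of a constant -256: the radiotap antenna-signal field is an unsigned byte that has to be reinterpreted as signed dBm, and a parser that unconditionally subtracts 256 turns a missing field (read back as 0) into exactly -256. A sketch of the two interpretations:

```python
def radiotap_dbm(raw_byte):
    """Interpret a radiotap antenna-signal byte as signed dBm.

    The field is an unsigned 8-bit value; real readings are negative,
    so values above 127 wrap into the signed range.
    """
    return raw_byte - 256 if raw_byte > 127 else raw_byte

def naive_dbm(raw_byte):
    # Buggy variant that always subtracts 256: a missing field
    # (read as 0) becomes the suspicious constant -256.
    return raw_byte - 256

print(radiotap_dbm(0xB5))  # 181 -> -75 dBm
print(naive_dbm(0))        # -> -256
```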

performance-problems on statuses/?device

Some REST endpoints on Core show massive performance problems after a certain run-time. I suspect specific endpoints are involved, in particular statuses/?device=UUID&limit=10, which takes about 2 seconds after reloading to return a result. I guess if the database contains lots and lots of statuses, this becomes a problem.

Maybe we should either

  • Add a TTL to statuses (as they become increasingly unimportant over time)
  • Keep a view of the last 10 statuses of each device, to avoid going through the ORM and potentially inefficient queries
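
The TTL idea can be prototyped outside the ORM first; a minimal sketch (the 'device' and 'created_at' keys and the limits are assumptions, not probr-core's actual schema):

```python
from datetime import datetime, timedelta

def prune_statuses(statuses, ttl=timedelta(days=7), keep_last=10, now=None):
    """Keep at most `keep_last` statuses per device, all younger than `ttl`.

    `statuses` is a list of dicts with assumed keys 'device' and 'created_at'.
    """
    now = now or datetime.utcnow()
    fresh = [s for s in statuses if now - s["created_at"] <= ttl]
    fresh.sort(key=lambda s: s["created_at"], reverse=True)  # newest first
    kept, per_device = [], {}
    for s in fresh:
        n = per_device.get(s["device"], 0)
        if n < keep_last:
            kept.append(s)
            per_device[s["device"]] = n + 1
    return kept
```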

probr-core git install errors

Greetings,

Thank you in advance for your support!

We have been trying to install probr-core on Ubuntu 17.04 following the steps on https://github.com/probr/probr-core, and all is good until:
Create the DB tables:
python manage.py migrate
We get the following errors...
~/Downloads/probr-core$ python manage.py runserver
Traceback (most recent call last):
File "manage.py", line 10, in <module>
execute_from_command_line(sys.argv)
File "/home/roddyr/Downloads/.env_probr/local/lib/python2.7/site-packages/django/core/management/__init__.py", line 338, in execute_from_command_line
utility.execute()
File "/home/roddyr/Downloads/.env_probr/local/lib/python2.7/site-packages/django/core/management/__init__.py", line 303, in execute
settings.INSTALLED_APPS
File "/home/roddyr/Downloads/.env_probr/local/lib/python2.7/site-packages/django/conf/__init__.py", line 48, in __getattr__
self._setup(name)
File "/home/roddyr/Downloads/.env_probr/local/lib/python2.7/site-packages/django/conf/__init__.py", line 44, in _setup
self._wrapped = Settings(settings_module)
File "/home/roddyr/Downloads/.env_probr/local/lib/python2.7/site-packages/django/conf/__init__.py", line 92, in __init__
mod = importlib.import_module(self.SETTINGS_MODULE)
File "/usr/lib/python2.7/importlib/__init__.py", line 37, in import_module
__import__(name)
File "/home/roddyr/Downloads/probr-core/probr/__init__.py", line 5, in <module>
from .celery import app as celery_app
File "/home/roddyr/Downloads/probr-core/probr/celery.py", line 5, in <module>
from celery import Celery
File "/home/roddyr/Downloads/.env_probr/local/lib/python2.7/site-packages/celery/__init__.py", line 130, in <module>
from celery import five
File "/home/roddyr/Downloads/.env_probr/local/lib/python2.7/site-packages/celery/five.py", line 149, in <module>
from kombu.utils.compat import OrderedDict # noqa
File "/home/roddyr/Downloads/.env_probr/local/lib/python2.7/site-packages/kombu/utils/__init__.py", line 19, in <module>
from uuid import UUID, uuid4 as _uuid4, _uuid_generate_random
ImportError: cannot import name _uuid_generate_random

We are really looking forward to getting probr up and running, it looks AWESOME!

Respectfully,

Can add Devices in probr-core GUI, but cannot upload pcaps or access the Probr-Analysis GUI

Greetings,

Thank you in advance for your support, and awesome project!

We are testing the same install/setup on Ubuntu 17.04, and Ubuntu 14.04. We get the same results on both OS versions.

PROBLEM
The Core GUI is available on port 80, but we can't find/open the Probr-Analysis GUI on localhost:9000, or on any of the ports listed for the docker instances (see below). Only port 80 on 83105ff61b69 (see below) is reachable via a web browser, i.e. localhost gives us the core GUI.

How can we access the Probr-Analysis pages/functionality?

Setup Details....

  1. We have followed the setup instructions from http://probr.ch/#/installation/core & http://probr.ch/#/installation/analysis and have the core GUI running, and we can see the DEVICES and the CAPTURES links.
  2. We have added the Ubuntu 17.04 and Ubuntu 14.04 machines as devices, and we can successfully run commands and start tcpdump from the Terminal, i.e.
    $ wget -qO- http://localhost/static/bootstrap.sh | sh -s 502676d532cf6a4b7732f33afe9b08d5fafabb422c7ebc36b7ab521545fcf8fc http://localhost
    Then from the probr-core terminal:
    sudo ifconfig wlx00c0ca904edc down
    sudo iwconfig wlx00c0ca904edc mode monitor
    sudo ifconfig wlx00c0ca904edc up
    sudo mkdir -p captures
    sudo tcpdump -e -ni wlx00c0ca904edc -s 0 -w captures/capture-%s.pcap -G 5 type mgt subtype probe-req

Note: the pcap files are being generated in /home/user/probr/captures

~/Downloads/probr-analysis$ sudo docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
7c32b07e645d probr/analysis:latest "node dist/server/..." 53 minutes ago Up 42 minutes 0.0.0.0:8080->8080/tcp probranalysis_analysis_1
b3c66d624e27 probr/analysis:latest "node dist/worker/..." 53 minutes ago Up 42 minutes probranalysis_analysis_worker_1
83105ff61b69 probr/core-nginx:latest "/usr/sbin/nginx" 24 hours ago Up 42 minutes 0.0.0.0:80->80/tcp probrcore_nginx_1
f34b967d7266 probr/core:latest "sh install/docker..." 24 hours ago Up 43 minutes 0.0.0.0:8001->8001/tcp probrcore_web_1
275d849ce073 probr/core:latest "sh install/docker..." 24 hours ago Up 43 minutes probrcore_worker_1
8137c2624451 postgres:9.4.5 "/docker-entrypoin..." 24 hours ago Up 43 minutes 5432/tcp probrcore_postgres_1
204344ac045e tutum/influxdb:0.9 "/run.sh" 24 hours ago Up 43 minutes 0.0.0.0:8083->8083/tcp, 0.0.0.0:8086->8086/tcp probrcore_influxdb_1
4a63e0ba972c mongo:3.0.6 "/entrypoint.sh mo..." 24 hours ago Up 43 minutes 27017/tcp probrcore_mongodb_1
51473482b030 probr/core:latest "sh install/docker..." 24 hours ago Up 43 minutes 0.0.0.0:8002->8002/tcp probrcore_websocket_1
39fd639e36e8 redis:2.8.19 "/entrypoint.sh re..." 24 hours ago Up 43 minutes 6379/tcp probrcore_redis_1

Thank you

clean up UI

We should clean up the UI and give every button a proper label:

(screenshot)

handle big logs of commands

If a command runs for a very long time, the log file gets pretty big (50 MB and more). If the command gets cancelled by the user, the device will upload this logfile to probr-core. We should make sure that logs this big are either not sent or are truncated. Maybe only the last lines?
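
Truncating to the last lines before upload is cheap; a sketch (the 50 MB threshold comes from the issue text, the line count is an assumption):

```python
from collections import deque
import os

MAX_BYTES = 50 * 1024 * 1024   # threshold mentioned in the issue
KEEP_LINES = 1000              # assumed: ship only the tail

def tail_log(path, keep_lines=KEEP_LINES, max_bytes=MAX_BYTES):
    """Return the log, truncated to its last lines if it is oversized."""
    if os.path.getsize(path) <= max_bytes:
        with open(path, "rb") as fh:
            return fh.read()
    with open(path, "rb") as fh:
        # a deque with maxlen keeps only the most recent lines
        return b"".join(deque(fh, maxlen=keep_lines))
```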

probr-core capturing pcaps, but probr-analysis shows "No Packets received yet."

Thank you in advance for your support!

ISSUE:
With both docker and local/python/virtualenv installations, probr-core (/web/captures) successfully captures pcaps from WiFi probes, and the analysis GUI (/analysis/packets) is there, but the /analysis/packets page shows "0 new Packets since loading." and the /analysis/live page shows "No Packets received yet.", i.e. no pcap or device data is visible in the analysis UI. Please note that we get the same results with the analysis GUI using the docker and python installations tested on Ubuntu 14.04, 16.04, and 17.04.

We installed core and analysis using the docker instructions from probr.ch, and tested the docker installations on Ubuntu 14.04, 16.04, and 17.04, along with the tips from issues #22 and #58: after starting both core and analysis, the analysis GUI is not available; only after running "docker-compose -f docker-compose-analysis.yml up -d" from the probr-core directory does the analysis GUI become available. We have also tested installing core and analysis locally via the python install instructions on Ubuntu 14.04, 16.04, and 17.04. With both docker and local installations we get the same result: probr-core (web/captures) works great and captures pcaps from a WiFi probe, but the analysis GUI (/packets) shows "0 new Packets since loading." and the /live page shows "No Packets received yet.".

We have looked at the logs from each of the containers, and the only error we see is from 48184b666161 probr/core:latest (see below), i.e. sudo docker logs --follow 48184b666161
ServerSelectionTimeoutError: localhost:27017: [Errno 111] Connection refused
[2018-02-10 12:56:19,072: ERROR/MainProcess] Task captures.tasks.processCapture[c6ee7777-9f6b-435a-b4d8-e567877b6f80] raised unexpected: ServerSelectionTimeoutError('localhost:27017: [Errno 111] Connection refused',)
Traceback (most recent call last):
File "/usr/local/lib/python2.7/site-packages/celery/app/trace.py", line 240, in trace_task
R = retval = fun(*args, **kwargs)
File "/usr/local/lib/python2.7/site-packages/celery/app/trace.py", line 438, in protected_call
return self.run(*args, **kwargs)
File "/app/captures/tasks.py", line 26, in processCapture
handler.handle(capture)
File "/app/handlers/handlers.py", line 53, in handle
packets.insert_one(jsonPacket)
File "/usr/local/lib/python2.7/site-packages/pymongo/collection.py", line 466, in insert_one
with self._socket_for_writes() as sock_info:
File "/usr/local/lib/python2.7/contextlib.py", line 17, in enter
return self.gen.next()
File "/usr/local/lib/python2.7/site-packages/pymongo/mongo_client.py", line 659, in _get_socket
server = self._get_topology().select_server(selector)
File "/usr/local/lib/python2.7/site-packages/pymongo/topology.py", line 120, in select_server
address))
File "/usr/local/lib/python2.7/site-packages/pymongo/topology.py", line 96, in select_servers
self._error_message(selector))

FYI,
probr1404:~/probr-core$ sudo docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
d6c9a81e3e45 probr/core-analysis-nginx:latest "/usr/sbin/nginx" 15 minutes ago Up 15 minutes 0.0.0.0:80->80/tcp probrcore_nginx_1
75fb7b3ba694 probr/analysis:latest "node dist/worker/ap…" 2 weeks ago Up 16 minutes probranalysis_analysis_worker_1
ef527c6343d5 probr/analysis:latest "node dist/server/ap…" 2 weeks ago Up 16 minutes 0.0.0.0:8080->8080/tcp probranalysis_analysis_1
48184b666161 probr/core:latest "sh install/docker/w…" 2 weeks ago Up 16 minutes probrcore_worker_1
b7e259ff7c4a probr/core:latest "sh install/docker/w…" 2 weeks ago Up 16 minutes 0.0.0.0:8001->8001/tcp probrcore_web_1
7db0ebc27066 tutum/influxdb:0.9 "/run.sh" 2 weeks ago Up 16 minutes 0.0.0.0:8083->8083/tcp, 0.0.0.0:8086->8086/tcp probrcore_influxdb_1
f6d6d1e74190 postgres:9.4.5 "/docker-entrypoint.…" 2 weeks ago Up 16 minutes 5432/tcp probrcore_postgres_1
c17fdb514c59 mongo:3.0.6 "/entrypoint.sh mong…" 2 weeks ago Up 16 minutes 27017/tcp probrcore_mongodb_1
8c576db60fe5 probr/core:latest "sh install/docker/w…" 2 weeks ago Up 16 minutes 0.0.0.0:8002->8002/tcp probrcore_websocket_1
2c6c46576152 redis:2.8.19 "/entrypoint.sh redi…" 2 weeks ago Up 16 minutes 6379/tcp probrcore_redis_1

Thank you again for your support!

Respectfully,

403 errors when running core

I am seeing 403 errors when starting up Django. I tried changing my user/password in probr/settings.py, but that resulted in a "password authentication failed for user" error, so the original postgres settings appear to be correct.

(.env_probr) pi@raspberrypi:~/probr-core $ python manage.py runserver 0:8000
Performing system checks...

System check identified no issues (0 silenced).
June 17, 2018 - 22:09:03
Django version 1.8.17, using settings 'probr.settings'
Starting development server at http://0:8000/
Quit the server with CONTROL-C.
[17/Jun/2018 22:09:06] "POST /api-device/statuses/ HTTP/1.1" 403 69
[17/Jun/2018 22:09:07] "GET /api-device/commands/?status=0 HTTP/1.1" 500 74527
Authentication:  Apikey=---
Authentication:  User=probr
[17/Jun/2018 22:09:07] "POST /api-device/statuses/ HTTP/1.1" 403 69
[17/Jun/2018 22:09:08] "GET /api-device/commands/?status=0 HTTP/1.1" 500 74527
- Broken pipe from ('192.168.1.14', 34866)

If I ignore these, I can get to the core web page, I can start capturing, and I have seen my pcap files come in, so maybe it's not my main issue? Ultimately I am unable to get Analysis to see any pcaps.

Issue when starting postgres

So after fixing some dependencies in requirements.txt (greenlet v0.4.6 is incompatible with gevent, so I bumped greenlet to the minimum v0.4.7), my issue has changed slightly. I still have postgres issues, but now the container doesn't appear to be starting properly:

pi@raspberrypi:~/probr-core $ docker-compose up -d
Removing probrcore_worker_1
Removing probrcore_web_1
Starting probrcore_datamongodb_1
Starting probrcore_dataprobr_1
Starting probrcore_datapostgres_1
Starting probrcore_websocket_1
Starting probrcore_datainflux_1
Starting probrcore_redis_1
Starting probrcore_mongodb_1
Starting probrcore_postgres_1
Starting probrcore_influxdb_1
Starting f0e8b60b1feb_probrcore_worker_1
Starting b9a15ff0a450_probrcore_web_1

ERROR: for web  Cannot start service web: Cannot link to a non running container: /abdeef552a8b_probrcore_postgres_1 AS /b9a15ff0a450_probrcore_web_1/probrcore_postgres_1

ERROR: for worker  Cannot start service worker: Cannot link to a non running container: /abdeef552a8b_probrcore_postgres_1 AS /f0e8b60b1feb_probrcore_worker_1/postgres
ERROR: Encountered errors while bringing up the project.

Trying to just start up postgres I get:

pi@raspberrypi:~/probr-core $ docker-compose up postgres
Starting probrcore_datapostgres_1
Starting probrcore_postgres_1
Attaching to probrcore_postgres_1
postgres_1      | standard_init_linux.go:190: exec user process caused "exec format error"
probrcore_postgres_1 exited with code 1

Any idea where I am going wrong?

upgrade all dependencies to latest version

All dependencies in requirements.txt are very old now. Upgrade them to their latest versions where possible.

Hint: last time I tried this, restframework was blocking POST requests because of late modification issues.

multi-execution of commands for devices

Make it possible to execute commands on multiple devices at the same time.

Proposal:

  • select multiple devices in overview
  • select command from dropdown
  • ask for confirmation whether it's ok to execute command XYZ on devices XYZ
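
The confirmation step aside, the fan-out itself is small; a sketch where `send_command` is a hypothetical stand-in for probr-core's real per-device dispatch:

```python
from concurrent.futures import ThreadPoolExecutor

def execute_on_devices(send_command, command, device_ids, max_workers=8):
    """Run `command` on every device concurrently; return {device_id: result}.

    `send_command(device_id, command)` is an assumed callable wrapping
    whatever probr-core uses to dispatch a command to one device.
    """
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        futures = {d: pool.submit(send_command, d, command) for d in device_ids}
        return {d: f.result() for d, f in futures.items()}

# Demo with a stub dispatcher:
print(execute_on_devices(lambda d, c: "%s -> %s" % (c, d), "status", ["pi-1", "pi-2"]))
```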

docker exec -ti probrcore_web_1 python manage.py createsuperuser

Ran through the directions and hit a snag:

~/TESTING/probr-analysis$ docker exec -ti probrcore_web_1 python manage.py createsuperuser
Traceback (most recent call last):
File "manage.py", line 10, in
execute_from_command_line(sys.argv)
File "/usr/local/lib/python2.7/site-packages/django/core/management/init.py", line 338, in execute_from_command_line
utility.execute()
File "/usr/local/lib/python2.7/site-packages/django/core/management/init.py", line 330, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
File "/usr/local/lib/python2.7/site-packages/django/core/management/base.py", line 390, in run_from_argv
self.execute(*args, **cmd_options)
File "/usr/local/lib/python2.7/site-packages/django/contrib/auth/management/commands/createsuperuser.py", line 50, in execute
return super(Command, self).execute(*args, **options)
File "/usr/local/lib/python2.7/site-packages/django/core/management/base.py", line 441, in execute
output = self.handle(*args, **options)
File "/usr/local/lib/python2.7/site-packages/django/contrib/auth/management/commands/createsuperuser.py", line 81, in handle
default_username = get_default_username()
File "/usr/local/lib/python2.7/site-packages/django/contrib/auth/management/init.py", line 177, in get_default_username
auth_app.User._default_manager.get(username=default_username)
File "/usr/local/lib/python2.7/site-packages/django/db/models/manager.py", line 127, in manager_method
return getattr(self.get_queryset(), name)(*args, **kwargs)
File "/usr/local/lib/python2.7/site-packages/django/db/models/query.py", line 328, in get
num = len(clone)
File "/usr/local/lib/python2.7/site-packages/django/db/models/query.py", line 144, in len
self._fetch_all()
File "/usr/local/lib/python2.7/site-packages/django/db/models/query.py", line 965, in _fetch_all
self._result_cache = list(self.iterator())
File "/usr/local/lib/python2.7/site-packages/django/db/models/query.py", line 238, in iterator
results = compiler.execute_sql()
File "/usr/local/lib/python2.7/site-packages/django/db/models/sql/compiler.py", line 829, in execute_sql
cursor.execute(sql, params)
File "/usr/local/lib/python2.7/site-packages/django/db/backends/utils.py", line 79, in execute
return super(CursorDebugWrapper, self).execute(sql, params)
File "/usr/local/lib/python2.7/site-packages/django/db/backends/utils.py", line 64, in execute
return self.cursor.execute(sql, params)
File "/usr/local/lib/python2.7/site-packages/django/db/utils.py", line 97, in exit
six.reraise(dj_exc_type, dj_exc_value, traceback)
File "/usr/local/lib/python2.7/site-packages/django/db/backends/utils.py", line 64, in execute
return self.cursor.execute(sql, params)
django.db.utils.ProgrammingError: relation "auth_user" does not exist
LINE 1: ...user"."is_active", "auth_user"."date_joined" FROM "auth_user...

can't post captures to probr-core or see them in analysis

Hi,

I have both probr-core and probr-analysis working from docker, but I can't get the packet captures from my Pi running Kali to probr-core! When I issue the command below on my Kali Raspberry Pi

Upload Existing

for file in captures/*.pcap
do
post_file '/api-device/device-captures/' "$file" && rm "$file"
done

I get: bash: post_file: command not found

I feel I am close to getting this working!

Can you help?
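
`post_file` is not a standard shell command; it is presumably a helper function defined by the probr device scripts, so it only exists in a shell that has sourced them. A standalone equivalent can be sketched with the Python standard library (the endpoint path is taken from the loop above; the form field name "file" is an assumption):

```python
import uuid

def build_multipart(field, filename, payload):
    """Build a multipart/form-data body plus its Content-Type header value."""
    boundary = uuid.uuid4().hex
    head = ("--%s\r\n"
            'Content-Disposition: form-data; name="%s"; filename="%s"\r\n'
            "Content-Type: application/octet-stream\r\n\r\n"
            % (boundary, field, filename)).encode()
    tail = ("\r\n--%s--\r\n" % boundary).encode()
    return head + payload + tail, "multipart/form-data; boundary=" + boundary

# To actually upload a capture (base_url is your probr-core host):
# body, ctype = build_multipart("file", "capture.pcap", open("capture.pcap", "rb").read())
# req = urllib.request.Request(base_url + "/api-device/device-captures/",
#                              data=body, headers={"Content-Type": ctype})
# urllib.request.urlopen(req)
```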

Improve command templates for sniffing and uploading

Issues

  • hardcoded values
  • unclear semantics of variables with the same name but other meaning
  • some captures are not uploaded when switching from uploading existing captures to the inotifywait loop (challenge: do not upload currently locked files!)

Identify further room for improvement (e.g., using pipes for capture uploads)
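
The locked-file challenge can be approximated without inotify bookkeeping by skipping files modified very recently, since a capture tcpdump is still writing keeps a fresh mtime (the quiet-period length is an assumption):

```python
import os
import time

def uploadable_captures(directory, quiet_seconds=10):
    """Yield .pcap files that have not been written to recently.

    A file still being written by tcpdump keeps a fresh mtime, so a
    short quiet period approximates 'not currently locked'.
    """
    cutoff = time.time() - quiet_seconds
    for name in sorted(os.listdir(directory)):
        path = os.path.join(directory, name)
        if name.endswith(".pcap") and os.path.getmtime(path) < cutoff:
            yield path
```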

add default actions

add more default actions:

  • update_scripts && check_for_updates
  • kill_all_commands

(screenshot)
