
Offensive Web Testing Framework

Build status | License: 3-Clause BSD | Python 3.6, 3.7, 3.8

OWASP OWTF is a project focused on penetration testing efficiency and on aligning security tests with security standards such as the OWASP Testing Guide (v3 and v4), the OWASP Top 10, PTES and NIST, so that pentesters have more time to:

  • See the big picture and think out of the box
  • More efficiently find, verify and combine vulnerabilities
  • Have time to investigate complex vulnerabilities like business logic/architectural flaws or virtual hosting sessions
  • Perform more tactical/targeted fuzzing on seemingly risky areas
  • Demonstrate true impact despite the short timeframes we are typically given to test.

The tool is highly configurable and anybody can trivially create simple plugins or add new tests in the configuration files without having any development experience.

Note: This tool is, however, not a silver bullet and will only be as good as the person using it: understanding and experience are required to correctly interpret tool output and decide what to investigate further in order to demonstrate impact.

Requirements

OWTF is developed on Kali Linux and macOS, but it is made for Kali Linux (or other Debian derivatives).

OWTF supports Python 3.

macOS prerequisites

Dependencies: install Homebrew (https://brew.sh/) and follow the steps below:

$ python3 -m venv ~/.virtualenvs/owtf
$ source ~/.virtualenvs/owtf/bin/activate
$ brew install coreutils gnu-sed openssl
# We need to install 'cryptography' first to avoid issues
$ pip install cryptography --global-option=build_ext --global-option="-L/usr/local/opt/openssl/lib" --global-option="-I/usr/local/opt/openssl/include"

Installation

Running as a Docker container:

The recommended way to use OWTF is to build the Docker image, so you will not have to worry about dependency issues or installing the various pentesting tools.

git clone https://github.com/owtf/owtf
cd owtf
make compose

Installing directly

Create and start the PostgreSQL database server

Using the preconfigured PostgreSQL Docker container (recommended)

Please make sure you have Docker installed!

Run make startdb to create and start the PostgreSQL server in a Docker container. In the default configuration, it listens on port 5342, exposed from the Docker container.

Manual setup (painful and error-prone)

You can also use a script to do this for you: see scripts/db_setup.sh. You'll need to modify the hardcoded variables if you change the corresponding ones in owtf/settings.py.

Start the PostgreSQL server:

  • macOS: brew install postgresql and pg_ctl -D /usr/local/var/postgres start
  • Kali: sudo systemctl enable postgresql; sudo systemctl start postgresql or sudo service postgresql start

Create the owtf_db_user user:

  • macOS: psql postgres -c "CREATE USER $db_user WITH PASSWORD '$db_pass';"
  • Kali: sudo su postgres -c "psql -c \"CREATE USER $db_user WITH PASSWORD '$db_pass'\""

Create the database:

  • macOS: psql postgres -c "CREATE DATABASE $db_name WITH OWNER $db_user ENCODING 'utf-8' TEMPLATE template0;"
  • Kali: sudo su postgres -c "psql -c \"CREATE DATABASE $db_name WITH OWNER $db_user ENCODING 'utf-8' TEMPLATE template0;\""

Installing OWTF

git clone https://github.com/owtf/owtf
cd owtf
python3 setup.py develop
owtf
Open `localhost:8009` in a web browser for the OWTF web interface, or run `owtf --help` to see all available commands.

Features

  • Resilience: If one tool crashes, OWTF will move on to the next tool/test, saving the partial output the tool produced before it crashed.
  • Flexible: Pause and resume your work.
  • Tests Separation: OWTF separates its traffic to the target into three main plugin types:
    • Passive: no traffic goes to the target
    • Semi-passive: normal traffic to the target
    • Active: direct vulnerability probing
  • Extensive REST API.
  • Almost complete OWASP Testing Guide (v3, v4), Top 10, NIST and CWE coverage.
  • Web interface: easily manage large penetration engagements.
  • Interactive report:
  • Automated plugin rankings from tool output, fully configurable by the user.
  • Configurable risk rankings
  • In-line notes editor for each plugin.

License

Check out LICENSE

Code of Conduct

Check out the Code of Conduct


reboot's Issues

Reporter service or any other equivalent

The current OWTF report is very interactive, but it cannot be exported in its current form. A reporter service could be written (I think one existed in the very early releases of OWTF) that exports a nice templated report, with findings and the pentester's additional notes, into multiple formats.

New installation method

The current method assumes that you are installing on a clean Kali VM. But in the recent version of Kali, i.e. 2.0, they have chowned the dist-packages directory so that other installed tools do not break.
One such issue: owtf/owtf#509
IMHO this is a good time to come up with a new installation plan using virtualenv.

Roadmap for the distributed architecture implementation

This can be a good place to discuss/debate the new distributed architecture and its components:

  • Core architecture (master, manager, slave)
  • The inter-component messaging framework, e.g. RPC, JSON-RPC, etc.
  • Internal analytics/monitoring/profiling (new feature)
  • Datastore

This could be the starting point for the first draft. @marioskourtesis?

MiTM Proxy

Hello Team,

Do we have a project plan for the new MiTM proxy?
We cannot use the same proxy once we have distributed nodes.

Kind Regards

Slave authentication

Planning on using client certificates to authenticate the slaves. It will be easy to generate certs for each slave, and moreover, Tornado has built-in support for specifying client certs in HTTP requests. Check this on providing a client key and cert in Tornado.
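A minimal sketch of the idea using only the standard-library `ssl` module (on the Tornado side, `HTTPRequest` accepts `client_cert`/`client_key` arguments that would carry the slave's cert). All file names here are hypothetical:

```python
import ssl

def make_slave_context(certfile, keyfile, ca_file):
    """Client side: the slave presents its certificate to the master.

    certfile/keyfile/ca_file are hypothetical paths to the per-slave
    certificate, its private key, and the CA that signed the master's cert.
    """
    ctx = ssl.create_default_context(ssl.Purpose.SERVER_AUTH, cafile=ca_file)
    ctx.load_cert_chain(certfile, keyfile)
    return ctx

def make_master_context(ca_file=None):
    """Server side: require and verify a client certificate from every slave."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.verify_mode = ssl.CERT_REQUIRED  # handshake fails without a valid slave cert
    if ca_file:
        ctx.load_verify_locations(cafile=ca_file)
    return ctx
```

With a scheme like this, each slave only needs a cert signed by an OWTF-controlled CA; revoking a slave is then just a matter of removing its cert from the CA's trust.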

Redesign the cache store for the proxy

The current caching system is file-based, which, on tests with a very large number of assessments, chokes up the host VM in terms of free space. Is it feasible to use datastores such as Redis or memcached, which are designed primarily for caching?
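One way to approach this is to define a small storage-agnostic interface first, so the proxy does not care whether the backend is a dict, Redis or memcached. A sketch with an in-memory backend mimicking Redis SETEX/GET semantics (class and method names are illustrative, not existing OWTF code):

```python
import time

class TransactionCache:
    """Sketch of a TTL cache interface. A real backend would map set/get
    onto Redis SETEX/GET (or memcached set/get) instead of this dict."""

    def __init__(self):
        self._store = {}  # key -> (expiry_timestamp, value)

    def set(self, key, value, ttl_seconds=3600):
        self._store[key] = (time.time() + ttl_seconds, value)

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        expiry, value = entry
        if time.time() > expiry:  # expired: evict, as Redis would
            del self._store[key]
            return None
        return value
```

The TTL also answers the disk-space problem directly: entries expire instead of accumulating for the lifetime of the assessment.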

ZEST as a plugin

Currently Zest is part of the whole transaction infrastructure, which is messy. So appropriate changes will be made to support Zest-like features as a plugin in the future. ZEST is best placed as a plugin because it is:

  • Target-specific (it works only on the transactions of a target)
  • Run only when the user wants it to be

Workers/Slaves Naming

As of now, all the information related to a worker is kept in memory. Since we are planning a distributed version, @DePierre has a way of exchanging information between the slave and master nodes.

I also want us to pick the names carefully: a single slave node can have multiple plugin processes running, so is the name "slave" intuitive? I have one scheme in mind in which an Oracle Node is the main node; we can then have Manager Nodes (the present slave ones), and each manager will have "Worker Processes" under it. Not a great scheme, but something that can be brainstormed upon.

Unit Tests

We will need unit tests for the reboot project, and we should start writing them as early as possible.

For unit testing, we could use:

  • unittest (Python 2 and 3)
    • Defines basic classes for unit tests
    • Can create Suites, has some basic matchers, etc.
  • PyHamcrest (Python 2 and 3)
    • Defines a large set of matchers that can be useful
  • nose and coverage to run the tests

With these dependencies, we'll be able to get unit test results like the following:

$ nosetests -v -d --with-cov
test_generate_proxy_code (tests.test_python_script.TestPythonTranslator) ... ok
test_generate_script (tests.test_python_script.TestPythonTranslator) ... ok
test_generate_search_code (tests.test_python_script.TestPythonTranslator) ... ok
test_parse_raw_request_https_domain_no_port (tests.test_translator.TestTranslator) ... ok
test_parse_raw_request_https_domain_port (tests.test_translator.TestTranslator) ... ok
test_parse_raw_request_https_ipv4_no_port (tests.test_translator.TestTranslator) ... ok
test_parse_raw_request_https_ipv4_port (tests.test_translator.TestTranslator) ... ok
test_parse_raw_request_https_ipv6_no_port (tests.test_translator.TestTranslator) ... ok
test_parse_raw_request_https_ipv6_port (tests.test_translator.TestTranslator) ... ok

Name                                                Stmts   Miss  Cover   Missing
---------------------------------------------------------------------------------
http_request_translator                                 0      0   100%   
http_request_translator.plugin_manager                 19     16    16%   6-20, 39-46
http_request_translator.python_script                  45     11    76%   29, 32-33, 38-48, 97
http_request_translator.templates                       0      0   100%   
http_request_translator.templates.python_template       6      0   100%   
http_request_translator.translator                    143     98    31%   25-57, 70-119, 130-138, 151-171, 186-187, 194-195, 208-215, 221, 223, 225, 231, 237, 240
http_request_translator.url                            48     26    46%   20, 23-25, 27, 31-37, 48-54, 75-82
http_request_translator.util                            4      0   100%   
---------------------------------------------------------------------------------
TOTAL                                                 265    151    43%   
----------------------------------------------------------------------
Ran 9 tests in 0.031s

OK

Maltego-like transforms for OWTF plugins (launch order)

Moved from owtf/owtf:

Idea courtesy of Nicolas Grégoire (@Agarri_FR) thanks!
Literal question: "Is it possible to launch a tool depending of the results of another one? Like whatweb => wpscan?"

To do this in an automated fashion, we could define chains of "tool 1 output => tool 2 input" doing the equivalent of Maltego transforms with tools launched by OWTF.

Following the philosophy of the rest of OWTF, lighter transforms that are likely to execute faster would be run first, so that the human has results to analyse ASAP; even if the scan takes forever, the user can cancel at any time, confident that the most "bang for your buck" tests have already been run.

UPDATE:
@anantshri
do we want tool 2 to run only when tool 1 suggests X, or do we want tool 2 to run as soon as tool 1 outputs X?

What I mean to say is: we can restrict a tool from running if a database field is not present, but chaining tools will only work if we leverage a full CLI interface where some kind of wrapper script implements such ideas.

If what I wrote sounds absurd, please explain what we want to do here.

@delta24: Also, some changes are needed in the plugin architecture and groups to define the chains.
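The whatweb => wpscan idea above can be sketched as a table of transforms: a trigger pattern applied to one tool's output plus a follow-up command template. Tool names come from the discussion; the data structure and function are hypothetical, not existing OWTF code:

```python
import re

# Each transform: (trigger regex on the first tool's output,
#                  follow-up command template). Entries are illustrative.
TRANSFORMS = {
    "whatweb": [
        (re.compile(r"WordPress", re.I), "wpscan --url {target}"),
        (re.compile(r"Drupal", re.I), "droopescan scan drupal -u {target}"),
    ],
}

def next_commands(tool, output, target):
    """Return follow-up commands whose trigger matched the tool's output."""
    return [tmpl.format(target=target)
            for pattern, tmpl in TRANSFORMS.get(tool, [])
            if pattern.search(output)]
```

Keeping the transforms as data rather than code would let users add chains in the configuration files, in the same spirit as the rest of OWTF's plugin/resource configuration.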

Redesign the web interface

This is just the placeholder issue for web interface redesign.

Reduce clicks and have a more natural flow ⇒ the current interface is not user-friendly; it can be modified to increase productivity while reducing redundancy and improving consistency.

  • The main OWTF report uses elements known as accordions to represent the findings of the plugins. The accordion element is not scalable: it is not a user-friendly way to represent hundreds of rows of plugins.
  • The goal here is to come up with an interface that is user-friendly, scalable, and maintainable.
    • CSS standards: currently OWTF uses a heavy CSS framework, Twitter Bootstrap, to render elements. Twitter Bootstrap includes many unnecessary elements and is so heavy-handed, so prescriptive, that it is actually a nuisance to modify; it's easier to just use its defaults.
    • Also write everything in SASS (Syntactically Awesome Stylesheets), reducing redundancy and development time and increasing consistency. SASS is very easy to write and is a more maintainable alternative.
    • Use Compass (a SASS -> CSS compiler) to concatenate the stylesheets and minify them whenever they are regenerated.

Removing Core access to Plugins

Most of our plugins use Core to do one or more of the following:

  • Get information related to the target they are being run on
  • Get resources for that particular plugin
  • Get the plugin helper functions they use

The first two are easily available through the API, and the plugin helper functions are something only plugins need. So I don't think there is a need to pass Core to our plugins. Instead, write a few methods that use our API and implement plugins as classes (like what Tao tried a year back). This way we reduce coupling a bit, and the distributed architecture will also work, since we are using HTTP calls to get the required data.
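A rough sketch of the shape this could take: a plugin base class that reaches target info and resources over the REST API through an injectable fetcher (so tests can stub the HTTP layer). The endpoint paths and class names here are hypothetical, not the actual OWTF API:

```python
import json
import urllib.request

class APIPlugin:
    """Sketch: plugins get target info and resources over the REST API
    instead of holding a reference to Core. Endpoint paths are assumed."""

    api_base = "http://127.0.0.1:8009/api"

    def __init__(self, target_id, fetch=None):
        self.target_id = target_id
        self._fetch = fetch or self._http_get  # injectable for testing

    def _http_get(self, url):
        with urllib.request.urlopen(url) as resp:
            return json.loads(resp.read().decode())

    def target_info(self):
        return self._fetch("{}/targets/{}".format(self.api_base, self.target_id))

    def resources(self, resource_type):
        return self._fetch("{}/resources/{}".format(self.api_base, resource_type))
```

Because every data access is an HTTP call, the same plugin code would run unchanged on a remote slave node.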

Would love to brainstorm!

Reboot Procedure

Since the discussion about the reboot project is starting, we should agree on the procedure to follow, from the issue page to the code, stopping by the documentation.

I would like to propose the following:

  1. Start the discussion on the GitHub issue webpage (like we are currently doing)
  2. Once the discussion becomes mature (e.g. a precise idea), we label it as 'Wiki'
  3. A 'Wiki' issue is then reported to the Wiki page
  4. The revision is done in the issue labelled as 'Wiki'
  5. Once the revision is done, we label the issue as 'Code'
  6. A 'Code' issue is then reported to the code base

Putting /doc in the main repo

Though I wrote the complete user docs initially, it is difficult for most people to contribute documentation changes in a separate repo. So I think it is better to put the user docs in the main repo, where we can update them side by side whenever a relevant change is made.

Usage of Bower

I want to suggest using Bower to manage our third-party UI dependencies (like the JS MVC framework, CSS framework, jQuery, icons, etc.) for the following reason:

  • Bower will keep track of the versions we are using, which will allow us to upgrade with a single command. This will only be a developer dependency, not an OWTF dependency.

I would love to have a Python alternative to Bower for this kind of handling. Please suggest!

OWTF entry points in /bin directory

The current OWTF project has a framework where all the code base lives, plus an owtf.py Python file at the top to drive it. The framework exposes the API for OWTF, and owtf.py solely exposes the CLI options.

While this is what we should do for the reboot project as well, I think we should move owtf.py into a directory named /bin and split it into two scripts:

  • One exposing the WebUI only
  • One exposing the CLI only

While both scripts will consist of the same things (i.e. performing the same actions), splitting them would make option parsing easier and reduce the maintenance cost, IMO.

So we would have something like:

.
├── bin
│   ├── owtfcli.py
│   └── owtfwebui.py
└── framework

(Maybe I am overthinking this and we could keep the current organisation.)

Usage of Backbone.js

Using an MVC framework simplifies our interface management and will keep it clean. Another benefit is that writing test suites for code built with these frameworks is easy. Any alternatives will be well heard!

Why Backbone.js?

I found it really intuitive to work with.

Grab output from passive scans

@anantshri: For a large number of passive scan results we are just providing a link. What are the possible reasons we are not doing the scan ourselves and giving the output in our own report?

I assume some sites might have rate limits, but for a lot of others we should be able to get a direct response, or a screenshot by leveraging something like Ghost.py.

That would make the report a bit more comprehensive.

UPDATE 1: @tunnelshade: I think screenshots are the way to go, because scraping becomes very difficult. Screenshots will help a lot, especially for something like a Google search, where it can easily be spotted whether there are any results. I am afraid this might not be easy for everything, because sometimes we have to wait a while before taking a screenshot (when some online resource is used for scanning), but it is definitely worth doing.

UPDATE 2: @anantshri: Exactly, some might have anti-automation measures, etc. But wherever it can work, taking a screenshot of the page would be the simplest way to give actionable visual intel. It would not be of much use to machines as parsable output, but at least before clicking on the link a person would have an idea of what to expect.

Proxy on Slaves

As we are planning a distributed architecture, it is not really ideal to use only one proxy, i.e. on the master. I suggest we put the proxy on the slave, and the slave serves a bare-minimum interface for proxy features like interception. Once a plugin is done, or periodically, all the HTTP transactions from the slave are pushed to the master. This has some benefits:

  • If the user wants to intercept, he can access the interceptor for a particular slave.
  • We remove the network latency for HTTP requests between master and slave.
  • The HTTP request and processing load is distributed.

The disadvantages, however, are:

  • The HTTP request cache is divided. (This can be avoided by sending all plugin runs for a particular target to one particular slave.)
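The "push after each plugin, or periodically" part above can be sketched as a small batching buffer on the slave; `push` stands in for the real slave-to-master RPC/HTTP call, and all names are hypothetical:

```python
class SlaveTransactionBuffer:
    """Sketch: the slave's proxy records transactions locally and pushes
    them to the master in batches (after each plugin, or periodically)."""

    def __init__(self, push, batch_size=100):
        self._push = push          # callable standing in for the master upload
        self._batch_size = batch_size
        self._pending = []

    def record(self, transaction):
        self._pending.append(transaction)
        if len(self._pending) >= self._batch_size:
            self.flush()

    def flush(self):
        """Called when a plugin finishes, or on a timer."""
        if self._pending:
            self._push(self._pending)
            self._pending = []
```

Batching keeps the master out of the request path entirely, which is exactly the latency benefit listed above.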

CLI support

It is a necessity to support command-line invocation, be it for testing or any other reason. So, instead of writing CLI support code directly, I recommend building a full-fledged API and then translating CLI input into calls to that API. Any thoughts are welcome!
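A minimal sketch of that layering: the CLI holds no scan logic and only translates argv into an API payload. The endpoint, flags and field names below are assumptions for illustration, not OWTF's real interface:

```python
import argparse

def build_parser():
    parser = argparse.ArgumentParser(prog="owtf")
    parser.add_argument("targets", nargs="*", help="target URLs")
    parser.add_argument("-g", "--plugin-group", default="web")
    return parser

def cli_to_api_call(argv):
    """Translate CLI arguments into a single API request description.

    The CLI layer does nothing else; the API does the actual work, so
    the web UI and CLI exercise exactly the same code paths.
    """
    args = build_parser().parse_args(argv)
    return {"endpoint": "/api/worklist",
            "payload": {"targets": args.targets,
                        "group": args.plugin_group}}
```

Since both front ends go through the API, tests can drive the whole framework without spawning the CLI at all.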

Enhance the proxy to be modular and have interception/replay capabilities

The current implementation of the MiTM proxy serves its purpose very well. It's fast, but it's not extensible. There are a number of good use cases for extensibility:

  • intercepting transactions
  • modifying on the fly or replaying transactions
  • adding additional capabilities to the proxy (such as session marking/changing) without polluting the main proxy code

What I propose is making the proxy extensible: plugins can be defined separately, and the user can choose which plugins to include dynamically (from the web interface).

Thoughts and opinions?

Plugin files rearrangement and Classification

Rearrangement

I think it is better to use only one file for each plugin's code, i.e. the active, passive, semi-passive, grep and external plugins for a single test live in the same file. There are some really good benefits to this.

For example, let's say we have our robots.txt plugin. Irrespective of plugin type, all the robots plugins should be able to parse a robots file (the semi-passive plugin makes a direct request, while the passive plugin makes use of a third-party service). So we placed our RobotParser in the plugin helper, but in reality RobotParser is not useful to any other plugin. By placing all plugin types in one file, we can share the processing code between them.
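The robots.txt example could look roughly like this as a single module: one shared parser, with the passive and semi-passive variants differing only in how the file is obtained. The fetch callables are hypothetical stand-ins for the real requester:

```python
def parse_robots(text):
    """Shared by every robots plugin type: extract Disallow'd paths."""
    entries = []
    for line in text.splitlines():
        key, _, value = line.partition(":")
        if key.strip().lower() == "disallow" and value.strip():
            entries.append(value.strip())
    return entries

def semi_passive(fetch_direct, target):
    # direct request to the target itself
    return parse_robots(fetch_direct(target + "/robots.txt"))

def passive(fetch_third_party, target):
    # same parsing, but the content comes from a third-party service
    return parse_robots(fetch_third_party(target))
```

The parser never leaves the file, so the plugin helper stays free of single-use utilities.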

Classification

Using only one base class for all plugins is a better idea. Let me explain:

  • A semi-passive robots plugin runs a curl command to get the robots file.
  • Active plugins generally run a command for a certain tool.

From a developer's perspective, there is no difference between these two. Grep plugins are arguably different from everything else, but someday we might want to suggest a command based on a finding in a grep plugin. The current plugin classification is based on the aggressiveness of testing.
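In other words, the passive/semi-passive/active split could become a metadata attribute on a single base class rather than separate hierarchies. A sketch (class names and the injectable runner are illustrative, not existing OWTF code):

```python
import shlex
import subprocess

class Plugin:
    """Sketch: one base class for all plugins; the aggression level is
    metadata, since most plugin types ultimately just run a command."""
    aggression = "passive"  # "passive" | "semi_passive" | "active" | "grep"
    command = None          # shell command template, when the plugin wraps a tool

    def run(self, target, runner=subprocess.check_output):
        if self.command is None:
            raise NotImplementedError("override run() for non-command plugins")
        # runner is injectable so tests need not execute real tools
        return runner(shlex.split(self.command.format(target=target)))

class RobotsSemiPassive(Plugin):
    aggression = "semi_passive"
    command = "curl -s {target}/robots.txt"
```

Grep plugins would override `run()` instead of setting `command`, which keeps the door open for them to suggest commands later, as mentioned above.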

Headless Browser in requester

We have a simple requester as of now, which is good for sending HTTP requests. It would be great to also have a headless browser factory inside the requester. The typical flow when an authenticated browser instance is requested:

  • The "Requester" module checks whether any login parameters are provided (i.e. creds, a script, or whatever)
  • Create a browser instance and do the necessary login steps
  • Hand the browser to whoever asked for it
  • When asked to close the browser, do a clean logout and kill the browser instance.

The only small problem here is that the plugin using the browser instance must be careful not to log out. To avoid this, we can wrap certain methods of the PhantomJS browser. What do you guys think?
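The flow above maps naturally onto a context manager, which also guarantees the clean logout/teardown even if the plugin raises. `make_browser` stands in for the real PhantomJS/driver factory, and `login`/`logout`/`quit` are hypothetical method names on that driver:

```python
from contextlib import contextmanager

@contextmanager
def authenticated_browser(make_browser, creds=None):
    """Sketch: hand out a logged-in browser and guarantee cleanup.

    make_browser is a factory for the (hypothetical) headless driver;
    creds, when given, are passed to its login method.
    """
    browser = make_browser()
    try:
        if creds:
            browser.login(**creds)
        yield browser
    finally:
        if creds:
            browser.logout()  # clean logout even if the plugin raised
        browser.quit()
```

The accidental-logout concern could then be handled inside this factory too, by wrapping the methods that would end the session before yielding the browser.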
