
POLAR Official version

POLAR [1] is a reachability analysis framework for neural-network controlled systems (NNCSs) based on polynomial arithmetic. Compared with existing arithmetic approaches that use standard Taylor models, our framework uses a novel approach to iteratively overapproximate the neuron output ranges layer-by-layer with a combination of Bernstein polynomial approximation for continuous activation functions and Taylor model arithmetic for the other operations. This approach can overcome the main drawback in the standard Taylor model arithmetic, i.e. its inability to handle functions that cannot be well approximated by Taylor polynomials, and significantly improve the accuracy and efficiency of reachable states computation for NNCSs. To further tighten the overapproximation, our method keeps the Taylor model remainders symbolic under the linear mappings when estimating the output range of a neural network.

Experimental results across a suite of benchmarks show that POLAR significantly outperforms state-of-the-art techniques in both efficiency and tightness of reachable set estimation.

Tight Bounds for Bernstein Polynomial Approximation of ReLU

In addition to the techniques described in [1], the following special treatment of the Bernstein polynomial (BP) approximation of ReLU units was implemented in POLAR's submission to the AINNCS category of ARCH-COMP 2022.

Owing to the shape of the ReLU activation function, we can directly obtain tight bounds for the BP approximation, as shown in the following figure, where the dashed lines represent the upper and lower bounds of the BP approximation.

The Taylor model (TM) overapproximation $p(x)+I$ of $\text{ReLU}(x)$ is given by $p(x) = BP(x) - \frac{BP(0)}{2}$ and $I = [-\frac{BP(0)}{2}, \frac{BP(0)}{2}]$, where $BP(0)$ is the Bernstein polynomial $BP(x)$ evaluated at $x=0$. It can be shown that for $x \in [a, b]$ with $a < 0 < b$, the bounds of the interval remainder $I$ are tight for any order-$k$ BP approximation with $k \geq 1$.

Installation

System Requirements

Ubuntu 18.04, MATLAB 2016a or later

Dependencies

POLAR relies on the Taylor model arithmetic library provided by Flow*. Please install Flow* in the same parent directory as POLAR. You can either use the following commands or follow the Flow* manual for installation.

  • Install dependencies through apt-get install
sudo apt-get install m4 libgmp3-dev libmpfr-dev libmpfr-doc libgsl-dev gsl-bin bison flex gnuplot-x11 libglpk-dev gcc-8 g++-8 libopenmpi-dev libpthread-stubs0-dev
  • Download Flow*
git clone https://github.com/chenxin415/flowstar.git
  • Compile Flow*
cd flowstar/flowstar-toolbox
make

Compile POLAR

cd POLAR
make

Example Usage

Example 1. Consider the following nonlinear control system (benchmark 1):

$$\dot{x}_0 = x_1, \qquad \dot{x}_1 = u x_1^2 - x_0,$$

where the control input $u$ is computed by a NN controller that has two hidden layers, twenty neurons in each layer, and ReLU and tanh as activation functions. Given a control stepsize $\delta$, we hope to verify whether the system reaches the target set $x_0 \in [0, 0.2]$, $x_1 \in [0.05, 0.3]$ from the initial set $x_0 \in [0.8, 0.9]$, $x_1 \in [0.5, 0.6]$ within 35 control steps.

Partial code of the benchmark 1 dynamics (in the reachnn_benchmark_1.cpp file) is shown below:

// Declaration of the state and input variables.
unsigned int numVars = 4;
Variables vars;

int x0_id = vars.declareVar("x0");
int x1_id = vars.declareVar("x1");
int t_id = vars.declareVar("t");
int u_id = vars.declareVar("u");

// Define the continuous dynamics.
ODE<Real> dynamics({"x1",
                    "u*x1^2-x0",
                    "1",
                    "0"}, vars);
...
// Define initial state set.
double w = stod(argv[1]);
Interval init_x0(0.85 - w, 0.85 + w), init_x1(0.55 - w, 0.55 + w), init_u(0); // w=0.05
...
// Define the neural network controller.
string net_name = argv[6];  // relu_tanh
string nn_name = "nn_1_"+net_name;
NeuralNetwork nn(nn_name);
...
// Order of Bernstein Polynomial
unsigned int bernstein_order = stoi(argv[3]); // 4
// Order of Taylor Model
unsigned int order = stoi(argv[4]); // 6
// Define the time interval
int steps = stoi(argv[2]);  // 35
...
// Define target set
vector<Constraint> targetSet;
Constraint c1("x0 - 0.2", vars);		// x0 <= 0.2
Constraint c2("-x0", vars);			// x0 >= 0
Constraint c3("x1 - 0.3", vars);		// x1 <= 0.3
Constraint c4("-x1 + 0.05", vars);		// x1 >= 0.05
...

The NN controller is specified in the nn_1_relu_tanh file as follows:

2 // number of inputs
1 // number of outputs
2 // number of hidden layers
20 // number of nodes in the first hidden layer
20 // number of nodes in the second hidden layer
ReLU // Activation function of the first layer
ReLU // Activation function of the second layer
tanh // Activation function of the output layer
// Weights of the first hidden layer
-0.0073867239989340305
...
0.014101211912930012
// Bias of the first hidden layer
-0.07480818033218384
...
0.29650038480758667
...
0 // Offset of the neural network
4 // Scale of the neural network

Then we can verify the NNCS with the following command under example1/benchmark1:

make reachnn_benchmark_1 && ./reachnn_benchmark_1 0.05 35 4 6 1 relu_tanh

where 0.05 is the width of the initial set, 35 is the number of control steps to verify, 4 is the order of the Bernstein polynomial, 6 is the order of the Taylor model, 1 enables the symbolic remainder, and relu_tanh selects the NN controller with ReLU and tanh activation functions (the nn_1_relu_tanh file).

A bash script run.sh is also available for each benchmark to run POLAR.

The computed flowpipes are shown in the figure.


The output file of POLAR shows the verification results:

Verification result: Yes(35)  // Verification result at the 35th step
Running Time: 11.000000 seconds // Total computation time in seconds

Here, "Yes" means that the target set contains the overapproximation of the reachable set; in other words, every trajectory of the system is guaranteed to reach the target set at time T. "No" means that the target set and the overapproximation of the reachable set are disjoint, so every trajectory of the system falls outside the target set. "Unknown" means that the target set intersects the overapproximation of the reachable set, so it is unknown whether every system trajectory ends inside the target set.

Contributors

Chao Huang, Jiameng Fan, Xin Chen, Zhilu Wang, Yixuan Wang, Weichao Zhou, Wenchao Li, Qi Zhu

References

[1] Yixuan Wang, Weichao Zhou, Jiameng Fan, Zhilu Wang, Jiajun Li, Xin Chen, Chao Huang, Wenchao Li and Qi Zhu. POLAR-Express: Efficient and Precise Formal Reachability Analysis of Neural-Network Controlled Systems, IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems (2023).

[2] Chao Huang, Jiameng Fan, Xin Chen, Wenchao Li and Qi Zhu. POLAR: A Polynomial Arithmetic Framework for Verifying Neural-Network Controlled Systems, Proceedings of the 20th International Symposium on Automated Technology for Verification and Analysis (ATVA 2022).


