
Deprecation Notice

The second generation of ContactNets' software is being developed as part of the DAIR Lab's Physics-based Learning Library.

ContactNets

This repository contains source code for the paper ContactNets: Learning Discontinuous Contact Dynamics with Smooth, Implicit Representations by Samuel Pfrommer*, Mathew Halm*, and Michael Posa, published in CoRL 2020.

Attribution notes

The osqpth and lemkelcp libraries found in lib are not our own and are licensed under the Apache Software License and MIT License, respectively. fast-nnls is also not our own and has no specified license. The file contactnets/utils/quaternion.py contains code extended from the Facebook research QuaterNet project, released under the Creative Commons Attribution-NonCommercial license. References to these projects are found below.

While all the code in contactnets is relevant to the presented results, the specifics of the loss formulation in equations 16-17 are outlined in contactnets/train/polyground3dsurrogate.py, and the ContactNets network architectures are constructed in contactnets/experiments/block3d/structured_learnable.py.

Setup

Requirements

  • Python 3.6.9 or higher, although this hasn't been rigorously tested; development was with Python 3.7.5. Running python --version and pip --version on the command line should both report versions satisfying this requirement and correspond to the same Python install.
  • 16GB RAM
  • Linux (tested on Ubuntu 18.04 LTS)

No GPU is required, although it can provide some speedup for the deep structured or end-to-end methods. These instructions were tested on a fresh no-GPU Google Cloud Deep Learning VM instance.

Dependencies

Due to the large number of dependencies, these instructions assume installation in a virtual environment:

python -m venv cnets_env
source ./cnets_env/bin/activate

The venv must be re-sourced after reboots. Once in the environment, all Python prerequisites and the local code can be installed by running

chmod u+x ./setup.sh
./setup.sh

Additionally, the following Linux packages must be installed:

sudo apt-get install freeglut3-dev psmisc

If you're getting errors like "no such file or directory: 'tensorboard'", the tensorboard command might not be on your PATH. Check by running tensorboard in the terminal. If the command can't be found, you might have to add a symbolic link in /usr/bin:

sudo ln -s ~/.local/bin/tensorboard /usr/bin

If you get grpcio errors you might also need to run:

pip install --upgrade grpcio

Generating Figure 1

Figure 1 from the paper can be regenerated by entering the demo directory and running

python figure1.py

The figure will be generated as PM_config.png and PM_loss.png.

Executing ContactNets

After installing the above dependencies, execute

python demo/experiment.py --method e2e --tosses 100

method can be one of e2e, poly, deep, or deepvertex, and tosses is the number of real-world training tosses to use (the x-axis in Figure 5 of the accompanying submission). The first three methods are evaluated in the ContactNets paper; the last is an experimental approach that learns vertex positions as a deep network. To verify that the install is working correctly, it may help to first run the e2e method with tosses=1, as shown below.
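For reference, that smoke test is just:

python demo/experiment.py --method e2e --tosses 1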

You can view the training process on tensorboard at localhost:6006. The images tab contains log-scaled plots of various losses / regularizers, rendered rollouts of a subset of the tosses, and, for ContactNets methods, renderings of the learned phi functions over configuration space; here theta is the angle of rotation about the y-axis. The g_normal_x/y/z plots render a projection of the ContactNets vertex positions for phi_n onto the corresponding axis. Units are scaled so that, with enough training samples, the corner positions should eventually converge to (1,1,1), (1,1,-1), ..., even with as few as 30 training tosses. h_tangent_x/y/z represents the same idea for phi_t and generally needs 60-100 training tosses to converge nicely, although due to friction irregularities it will not converge as precisely as the normal component.
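If tensorboard does not come up on its own, you can launch it manually; the log directory below is an assumption, so point --logdir at wherever your run writes its event files:

tensorboard --logdir out --port 6006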

Different losses are plotted in the Custom Scalars tab. Trajectory position integral error corresponds to plot 5a, trajectory angle integral error to plot 5b, and trajectory penetration integral to plot 5c. For ContactNets methods, the Surrogate losses refer to the loss in equations 16-17, while for e2e the vel_basic losses refer to equation 21. After patience epochs have passed without an improvement in the validation loss, summary statistics are written to out/best/stats.json.

Note that you may occasionally get ALSA lib underrun errors in stdout. These are harmless.

Executing other experiments

The above method is the easiest way to run an experiment. For finer-grained control, you can go into any one of the experiment directories in contactnets/experiments and use the following procedure; a combined sketch follows the list.

  • Run gen.py to put simulated data in out/data/all. Start with the arguments --runs 3 --steps 5 to see if the process works.
  • Go up one level and run split.py in experiments. This will split data from out/data/all to the out/data/train, out/data/valid, and out/data/test directories.
  • Run train.py in the original directory for the particular experiment. Open up the training script to modify the training parameters.

You can find out more about any of these commands by adding the --help flag.
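Put together, the procedure looks roughly like this, using block3d as an illustrative choice of experiment directory (the relative output paths in the comments are assumptions; check each script's --help output):

cd contactnets/experiments/block3d   # any experiment directory works the same way
python gen.py --runs 3 --steps 5     # simulated data lands in out/data/all
cd ..
python split.py                      # splits into out/data/train, out/data/valid, out/data/test
cd block3d
python train.py                      # edit this script to change training parameters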

Real-world data. Note that for the 3D block example, you can either generate synthetic data as described above or use real-world data. This data is stored pre-processed in data/tosses_processed; each file corresponds to one extracted toss from the dataset. As shown in contactnets/experiment.py, you can copy this data to out/data/all and copy data/params_processed to out/params instead of running gen.py. Then run split.py and proceed as usual.

This data has already been processed from raw odometry readings, and bad tosses have been thrown out. For those interested in the data for other purposes, the row format is: position (3), quaternion (4), velocity (3), angular velocity (3), control (6), where the numbers indicate the number of columns per field. Position and velocity are scaled by 1 / BLOCK_HALF_WIDTH, where BLOCK_HALF_WIDTH = 0.0524m; this puts the resting cube height at 1. To get back into meters, just multiply positions and velocities by 0.0524. Quaternions are stored real part first. Finally, control is all zeros, since each toss starts after release.
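A minimal sketch of reading one toss back into meters, assuming the processed files load as plain numeric arrays (the filename and on-disk format below are hypothetical):

import numpy as np

BLOCK_HALF_WIDTH = 0.0524  # meters

# Hypothetical file; substitute an actual file from data/tosses_processed.
toss = np.loadtxt('data/tosses_processed/toss_0.csv', delimiter=',')

position    = toss[:, 0:3] * BLOCK_HALF_WIDTH    # meters
quaternion  = toss[:, 3:7]                       # real part first
velocity    = toss[:, 7:10] * BLOCK_HALF_WIDTH   # meters per second
angular_vel = toss[:, 10:13]
control     = toss[:, 13:19]                     # all zeros after release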

Headless server

If you are running this on a headless Linux server, you will need to install the xvfb package and then execute the following commands before running experiment.py:

Xvfb :5 -screen 0 800x600x24 &
export DISPLAY=:5

Running tests

The test directory provides tests for JacProp as well as integration tests for all experiments. The JacProp tests compare the explicitly computed jacobian functions against autograd-based jacobians, which are much slower. The integration tests cover generating, splitting, and training all methods for each experiment. Integration tests will take a few minutes to run.
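The repository doesn't pin a specific test runner here; assuming pytest is available in the environment, the suite can be run from the root directory with:

pytest test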

Linting, type checking, and import sorting

The codebase passes linting and is annotated to pass static type checking tests. To lint, run flake8 contactnets from the root directory. For static type checking, run mypy contactnets. To sort all imports recursively, run isort contactnets. These should all use the config files set up in the root directory. It is also possible to set up your editor to run flake8 and mypy. Syntastic is one good option for vim.
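For reference, all three commands from the root directory:

flake8 contactnets
mypy contactnets
isort contactnets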

Architecture

Simulation

An Entity represents something in your environment, whether it is a polytope object, the ground, or a point mass. Each entity keeps track of a history of configurations, velocities, and control impulses for that body. Something without a real state, like a ground entity, maintains state vectors of length zero. Anything inheriting from Entity must specify the dimensions of its configuration / velocity, as well as implement methods specifying its mass matrix, gamma, and free evolution dynamics.
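A rough sketch of what such a subclass might look like; the base-class import path, constructor arguments, and method names here are illustrative assumptions, not the library's actual API:

import torch
from torch import Tensor
from contactnets.entity import Entity  # assumed import path

class PointMass(Entity):
    def __init__(self, mass: float) -> None:
        super().__init__(configuration_dim=3, velocity_dim=3)  # assumed signature
        self.mass = mass

    def compute_M(self, configuration: Tensor) -> Tensor:
        # Mass matrix: constant and diagonal for a point mass.
        return self.mass * torch.eye(3)

    def compute_gamma(self, configuration: Tensor) -> Tensor:
        # Maps velocities to configuration time derivatives; identity here.
        return torch.eye(3)

    def free_velocity(self, velocity: Tensor, control: Tensor, dt: float) -> Tensor:
        # Contact-free evolution: apply gravity and the control impulse.
        gravity_impulse = torch.tensor([0.0, 0.0, -9.81]) * self.mass * dt
        return velocity + (gravity_impulse + control) / self.mass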

A set of entities is related by an Interaction. Right now, interactions only act on pairs of entities. Something extending Interaction must implement methods for computing phi, Jn, phi_t, Jt_tilde, and k, where k is the number of elements in phi (for current approaches, interpretable as the number of vertices). There are a few interaction implementations currently, including 2D/3D polygon-ground interactions. These are hard-coded and are mainly used for simulation. For learning, each experiment generally has some kind of "learnable" interaction, which subclasses one of the basic ones but adds trainable parameters. For example, you could have a learnable poly-ground interaction which subclasses the basic poly-ground interaction, retains its tangent jacobian calculations, but uses a deep network to compute phi and Jn.
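That last example might look something like the following sketch; the class names, constructor, and method names are assumptions for illustration:

import torch
from contactnets.interaction import PolyGround3D  # assumed base class / import path

class DeepPolyGround3D(PolyGround3D):
    def __init__(self, poly, ground, k: int, hidden: int = 128) -> None:
        super().__init__(poly, ground, k)
        # Small network replacing the analytic gap function phi;
        # input dim 7 = position (3) + quaternion (4) for a 3D rigid body.
        self.phi_net = torch.nn.Sequential(
            torch.nn.Linear(7, hidden), torch.nn.ReLU(),
            torch.nn.Linear(hidden, k))

    def compute_phi(self, configuration: torch.Tensor) -> torch.Tensor:
        # Learned signed-distance-like quantities, one per vertex/contact.
        return self.phi_net(configuration)

    # Jn could be obtained by differentiating phi_net w.r.t. the configuration;
    # tangent quantities (phi_t, Jt_tilde) are inherited from the base class.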

All interactions are managed by a single InteractionResolver. This object resolves all interactions in the environment at each step and computes the next states of all entities registered to it. Currently implemented resolvers are an LCP Stewart-Trinkle-based resolver, an elastic Anitescu-Potra resolver, and a resolver for DirectInteractions, which don't learn any special parameterization but instead directly learn forces between objects (these are what we refer to as end-to-end methods). An important thing to note is that no trainable parameters should be introduced at the resolver level; Interaction and Entity instances should be the only things containing learnable parameters.

Finally, we have a System which ties the above components together. A System consists of a single resolver and a list of entities. Systems subclass PyTorch modules, so a system inherits the parameters of all the entities as well as the interactions (which are first inherited by the resolver). The System simulates a rollout of a model by taking as input a list of lists of controls, with each entity getting one control per time step. The system will then use the resolver to compute inter-body impulses and allow each entity to compute its own dynamics step, combining the control variable (which was assigned to its history) and the resolved collision impulses.
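Assembling a simulation might then look like this sketch; every constructor and method name below is an assumption layered on the description above:

import torch

block = PolyBlock3D()                      # hypothetical polytope entity
ground = Ground3D()                        # stateless ground entity
interaction = PolyGround3D(block, ground, k=8)
resolver = StewartTrinkleResolver([interaction])
system = System(entities=[block, ground], resolver=resolver)

# One control per entity per time step; zeros for a passive toss.
# The ground has zero-dimensional state, hence the empty tensor.
controls = [[torch.zeros(6), torch.zeros(0)] for _ in range(100)]
trajectory = system.sim(controls)          # assumed rollout entry point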

Training

As hinted at above, the System is now a PyTorch module with all the trainable parameters of the system, whether those be in interactions or entities. Parameters can be added to either, depending on the needs of the researcher; parameters belonging to an entity are simply shared among its interactions (e.g., mass or inertia).

A Loss operates on a system to produce a scalar loss, working from the current configuration / velocity / control histories stored in the entities. Losses can operate either stepwise or trajectorywise; if stepwise, they expect to compute a loss over a system with a history of length two. Additionally, some losses are allowed to mutate the system, which is permitted only for reporting losses or a single training loss (described below).
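A stepwise training loss might be sketched as follows; the base-class import and the one-step prediction API are assumptions:

import torch
from contactnets.train import Loss  # assumed import path

class VelocityLoss(Loss):
    def compute(self, system) -> torch.Tensor:
        # Stepwise: the system's histories have length two, so compare the
        # model's predicted step against the recorded one.
        entity = system.entities[0]
        predicted = system.resolver.step_velocity(system)  # assumed one-step API
        measured = entity.velocity_history[-1]
        return ((predicted - measured) ** 2).sum()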

A LossManager manages a list of training losses and reporting losses. Training losses are what actually compute the gradient, and must be non-mutating if there is more than one, since all losses are evaluated over a single system state. Reporting losses are things like trajectory error / l2 cost, which are reported as metrics but do not need to provide gradients; these can mutate the system if necessary.

The Trainer class manages the training process. It has as member variables a LossManager, a DataManager, and a TensorboardManager, which more or less do what you would expect. It also has a few callbacks that allow modification of the gradients before and after each training step, if required by certain methods.
