
pyrobolearn's Introduction

PyRoboLearn

This repository contains the code for the PyRoboLearn (PRL) framework: a Python framework for Robot Learning. This framework revolves mainly around 7 axes: simulators, worlds, robots, interfaces, learning tasks (= environment and policy), learning models, and learning algorithms.

Warning: The development of this framework is ongoing, and thus some substantial changes might occur. Sorry for the inconvenience.

Requirements

The framework has been tested with Python 2.7, 3.5 and 3.6, on Ubuntu 16.04 and 18.04. The installation on other OS is experimental.

Installation

There are two ways to install the framework:

  1. using a virtual environment and pip
  2. using Docker

Virtualenv & Pip

1. First, download the pip Python package manager and create a virtual environment for Python, as described at the following link: https://packaging.python.org/guides/installing-using-pip-and-virtualenv/. On Ubuntu, you can install pip and virtualenv by typing in the terminal:

  • In Python 2.7:
sudo apt install python-pip
sudo pip install virtualenv
  • In Python 3.5:
sudo apt install python3-pip
sudo pip install virtualenv

You can then create the virtual environment by typing:

virtualenv -p /usr/bin/python<version> <virtualenv_name>
# activate the virtual environment
source <virtualenv_name>/bin/activate

where <version> is the python version you want to use (select between 2.7 or 3.5), and <virtualenv_name> is a name of your choice for the virtual environment. For instance, it can be py2.7 or py3.5.

To deactivate the virtual environment, just type:

deactivate
  2. Clone this repository and install the requirements by executing the setup.py:

In Python 2.7:

git clone https://github.com/robotlearn/pyrobolearn
cd pyrobolearn
pip install numpy cython
pip install http://github.com/cornellius-gp/gpytorch/archive/alpha.zip  # this is for Python 2.7
pip install -e .  # this will install pyrobolearn as well as the required packages (so no need for: pip install -r requirements.txt)

In Python 3.5:

git clone https://github.com/robotlearn/pyrobolearn
cd pyrobolearn
pip install numpy cython
pip install gpytorch  # this is for Python 3.5
pip install -e .  # this will install pyrobolearn as well as the required packages (so no need for: pip install -r requirements.txt)

Depending on your computer configuration and the Python version you use, you might also need to install the following packages through apt:

sudo apt install python-tk  # if python 2.7
sudo apt install python3-tk  # if python 3.5

Docker

At the moment, the Docker image is a self-contained Ubuntu image with all the libraries installed. When launched, you have access to a Python 3.6 interpreter and can import pyrobolearn directly. In the future, ROS may be split into another container and linked to this one.

  1. Install Docker and nvidia-docker
sudo apt-get update
sudo apt install apt-transport-https ca-certificates curl software-properties-common
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu bionic stable"  # replace bionic with your Ubuntu release
sudo apt update
sudo apt install docker-ce
sudo systemctl status docker # check that docker is active
  2. Build the image
docker build -t pyrobolearn .
  3. Launch

You can now start the Python interpreter with every library already installed:

docker run -p 11311:11311 -v $PWD/dev:/pyrobolearn/dev/:rw -ti pyrobolearn python3

To open an interactive terminal in the docker image use:

docker run -p 11311:11311 -v $PWD/dev:/pyrobolearn/dev/:rw -ti pyrobolearn /bin/bash

4. nvidia-docker: if the GPU is not recognized in the interpreter, you can install nvidia-docker:

curl -sL https://nvidia.github.io/nvidia-docker/gpgkey | sudo apt-key add -
distribution=$(. /etc/os-release;echo $ID$VERSION_ID)
curl -sL https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.list | sudo tee /etc/apt/sources.list.d/nvidia-docker.list
sudo apt-get update
sudo apt-get install nvidia-docker2
sudo pkill -SIGHUP dockerd

And use:

nvidia-docker run -p 11311:11311 -v $PWD/dev:/pyrobolearn/dev/:rw -ti pyrobolearn

Other Operating Systems

Note that some interfaces (like game controllers, depth cameras, etc.) might not be available on other operating systems; however, the main robotic framework should work.

  1. Windows: You will first have to install PyBullet and NLopt.

For NLopt, first install conda, then type:

conda install -c conda-forge nlopt

If PyBullet doesn't install on Windows (using Visual Studio), you might have to copy rc.exe and rc.dll from

C:\Program Files (x86)\Windows Kits\10\bin\<xx.x.xxxx.x>\x64

to

C:\Program Files (x86)\Windows Kits\10\bin\x86

And add the last folder to the Windows environment path (Go to System Properties > Advanced > Environment Variables > Path > Edit).

Finally, remove the nlopt package from the requirements.txt. The rest of the installation should be straightforward.

2. Mac OSX: We managed to install the PyRoboLearn framework on macOS (Mojave) by following the procedure explained in the section "Virtualenv & Pip". You can replace sudo apt install with brew install (after installing Homebrew).

How to use it?

Check the README.rst file in the examples folder.

License

PyRoboLearn is currently released under the GNU GPLv3 license.

Citation

For how to cite this repository, please refer to the CITATION.rst file.

If you use a specific learning model, algorithm, robot, controller, and so on, please cite the corresponding paper. The reference(s) can usually be found in the class documentation (at the end), and sometimes in the README file in the corresponding folder.

Acknowledgements

Currently, we mainly use the PyBullet simulator.

  • PyBullet, a Python module for physics simulation for games, robotics and machine learning, Erwin Coumans and Yunfei Bai, 2016-2019
  • References for each robot, model, and others can be found in the corresponding class documentation
  • Locomotion controllers were provided by Songyan Xin
  • We thank Daniele Bonatto for providing the Dockerfile and for testing the installation on Windows.

pyrobolearn's People

Contributors

bdelhaisse, dbonattoj, lrozo, lukasfro, noemiejaquier, tflqw


pyrobolearn's Issues

Problems encountered in training GMM models

When I run gmr.py and gmr_letter.py, the initialization of the GMM is different, and I don't understand the initialization of gaussians = (mean, cov):
gmm = GMM(gaussians=[Gaussian(mean=np.random.uniform(-1., 1., size=dim),
                              covariance=0.1 * np.identity(dim))
                     for _ in range(num_components)])

gmm = GMM(gaussians=[Gaussian(mean=np.concatenate((np.random.uniform(0., 2., size=1),
                                                   np.random.uniform(-8., 8., size=dim-1))),
                              covariance=0.1 * np.identity(dim))
                     for _ in range(num_components)])

When I tried to train a GMM on the position data of the end-effector, the output was completely wrong. If possible, I hope you can write an example. Thank you!
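A possible starting point, as a hedged sketch (the GMM/Gaussian constructors are taken from the snippet above; the import path is an assumption): initialize the components from the demonstration data itself rather than from a fixed uniform range, so the Gaussians start near the data before training.

import numpy as np
from pyrobolearn.models import GMM, Gaussian  # import path assumed

data = np.random.randn(200, 3)   # stand-in for recorded end-effector positions, shape (N, dim)
num_components, dim = 5, data.shape[1]

# pick random data points as initial means, and the (regularized) data covariance as initial covariance
idx = np.random.choice(len(data), size=num_components, replace=False)
gmm = GMM(gaussians=[Gaussian(mean=data[i],
                              covariance=np.cov(data.T) + 1e-6 * np.identity(dim))
                     for i in idx])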

globally including non-mandatory libraries

It seems that almost all dependencies, whether truly required or not, get imported globally.

Example:
For instance, raisimLib should not be required to run examples that do not depend on and do not use raisim. However, when it is not installed on a user's machine, pyrobolearn immediately throws an error like:

current_dir=/home/.../.local/lib/python2.7/site-packages/pybullet_envs/bullet
No module named raisimpy
HINT: you need to install `raisimLib` and `raisimOgre`, and build the Python wrappers that are located in the raisim_wrapper folder.

Possible solutions:

  • restructure include files
  • install all dependencies automatically via script

Thanks in advance, and best regards!

Changes in Raisim installation script

  1. This line throws an error:
    Reading package lists... Done
    Building dependency tree
    Reading state information... Done
    E: Unable to locate package libeigen-dev
    Do we need to install libeigen3-dev instead?

  2. Line 45 executes cmake in a folder that was just created without any CMakeLists.txt. Do we need first to git-clone something here?

I did not keep executing lines after these two issues arose.

Please create humanoid IK tutorials such as robot standing or robot horizontal walking

Hi, thank you for your effort in making good implementations in PyRoboLearn.
I would like to know how to use humanoid IK (for example, for robot standing or horizontal walking).
I tried to load a humanoid and apply the inverse kinematics code you published on GitHub, but I got some errors and it did not work well.
Please make some tutorials for the humanoid case!

Add default directories to install scripts (improvement)

Some install scripts ask the user for a path where to install a certain package, although this is in fact not required, as the packages are built and subsequently installed globally to /usr/local* anyway.
As a user-friendly improvement, you can set a default download directory, e.g. something like /tmp, which gets routinely deleted at startup!

Also, there is an inconsistency w.r.t. the install location in the scripts.
Some scripts automatically install libraries and headers to /usr/local*; others implicitly set $CMAKE_PREFIX_PATH via $LOCAL_BUILD.
I'd suggest replacing LOCAL_BUILD entirely with CMAKE_PREFIX_PATH, since that makes it clear to the user what the variable means, and defaulting the latter to /usr/local*.
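As a sketch of what this could look like in the install scripts (variable names hypothetical):

# default the install prefix and the download directory unless the user overrides them
CMAKE_PREFIX_PATH=${CMAKE_PREFIX_PATH:-/usr/local}
DOWNLOAD_DIR=${DOWNLOAD_DIR:-/tmp/prl_build}   # /tmp is routinely wiped at startup
mkdir -p "$DOWNLOAD_DIR" && cd "$DOWNLOAD_DIR"
# ... git clone / extract the package here ...
cmake -DCMAKE_INSTALL_PREFIX="$CMAKE_PREFIX_PATH" <package_source_dir>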

Reaching environment is not showing any change in response to action

I am testing it using the following code in reaching.py

# Test
if __name__ == "__main__":
    from itertools import count
    import numpy as np
    import pyrobolearn as prl

    # create simulator
    sim = prl.simulators.Bullet()

    # create environment
    env = ReachingManipulationEnv(sim, control_mode = 'torque')
    action = env.action
    action.data = np.array([1,0,0,20,0,0,0])
    print("\nAction: {}".format(action))
    # run simulation
    for _ in count():
        # print(action)
        env.step(action, sleep_dt=1./240)

Please guide me on what I am missing here.
Thanks.

Pleurobot end effector position

Hello,
Thanks for the repo, it is fantastic what you have done here! I come from UCL in Belgium and am trying to use PyRoboLearn for the first time for a course: the objective is to simulate just one leg of Pleurobot (imposed by our teacher) following a custom path.

It is very easy to compute the path we want, and following the inverse kinematics example, we tried to make one leg follow our path. But it isn't working very well!

The entire code can be found here: https://github.com/nicolascolsoul/biorobotics/blob/master/pleurobot_lmeca.py
It looks very similar to the inverse kinematics example except for this line,
link_id = robot.get_end_effector_ids(end_effector=5)
which selects the right rear leg. But the end effector isn't following our curve, and I don't understand why.
Is the end effector (in the Pleurobot model) the last joint or the output of the last joint?
[image: 90525681_3673915489346970_5568813085134684160_n]
I want point A to follow my path, but it seems that it is point B.

How can we achieve that? I'm not very skilled in robotics/simulation, so if this question sounds stupid, please tell me. Thank you so much.

Downsample meshes? (Suggestion)

Hi, and first and foremost, thanks for the great work!
I checked out the repo and noted that the robots/urdf* folder is roughly 900 MB, apparently because of extremely high-resolution meshes.
Would it be an option to downsample the meshes a bit? For users, that would mean a leaner repo, faster collision checking in classical motion planning, faster rendering in visualization, etc.!
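For reference, a minimal decimation sketch using Open3D (a third-party tool, not part of pyrobolearn; the mesh paths are placeholders):

import open3d as o3d

mesh = o3d.io.read_triangle_mesh("robots/urdfs/<robot>/mesh.obj")  # placeholder path
# reduce the mesh to a fixed triangle budget
simplified = mesh.simplify_quadric_decimation(target_number_of_triangles=5000)
o3d.io.write_triangle_mesh("mesh_low.obj", simplified)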

Changes in dart installation script.

  1. If the user is behind a proxy/firewall, this line may throw an error:
    Cloning into 'dart'... fatal: unable to connect to github.com: github.com: Name or service not known
    Better try cloning via https whenever possible. This also applies to other installation scripts using the same git-clone as this.

  2. Typing make -j4 dartpy throws the error:
    make: *** No rule to make target 'dartpy'. Stop.
    I guess one first needs to move to the python folder and then run the line above.

Possible error when adding linear and angular velocities (robot.py)

File: robot.py
Function: get_link_world_velocities(self, link_ids=None, flatten=True):
Issue: When computing np.array(lin_vel + ang_vel) in line 1407, the "addition" may actually add the two velocity vectors element-wise instead of concatenating them (which depends on the type of data structure returned by the simulator, e.g. list or array).
Notes: This problem may replicate in line 1413.
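A quick demonstration of the pitfall, and a fix that is safe for both data types:

import numpy as np

lin_vel, ang_vel = [1., 2., 3.], [0.1, 0.2, 0.3]
print(np.array(lin_vel + ang_vel))         # lists: `+` concatenates -> 6 values (intended)

lin_vel, ang_vel = np.asarray(lin_vel), np.asarray(ang_vel)
print(np.array(lin_vel + ang_vel))         # arrays: `+` adds element-wise -> 3 values (bug)
print(np.concatenate((lin_vel, ang_vel)))  # explicit concatenation works in both cases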

Issues with Mujoco installation script

  1. When running/using Mujoco, we need to first install patchelf:
    sudo apt install patchelf

  2. We also need to add these dependencies:
    pip install pymesh pyassimp

  3. When running examples where Mujoco is the simulator (from PyCharm), I got this error:
    Process finished with exit code 139 (interrupted by signal 11: SIGSEGV)

Difference between Pybullet and Pyrobolearn simulator

I have recently started using PyRoboLearn. I used to create visual shapes in PyBullet as:

import pybullet as p
p.createVisualShape(shape, radius=radius, length=length, rgbaColor=[0.8, 0.6, 0.4, 1], specularColor=specular_color, visualFramePosition=position_offset, visualFrameOrientation=orientation)

It has many arguments that are not available in PyRoboLearn, which only provides a few:

sim = Bullet()
sim.create_visual_shape(sim.GEOM_CAPSULE, radius=radius, length=height/2., rgba_color=color)

How can I use other arguments in PyRoboLearn which are available in PyBullet?
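One workaround, as a hedged sketch (not documented pyrobolearn API): since the Bullet wrapper sits on top of the pybullet module, the missing arguments can be passed by calling pybullet directly on the same physics server; the variables are those from the snippets above, and the `sim.id` client-id attribute is an assumption here.

import pybullet as p

visual = p.createVisualShape(p.GEOM_CAPSULE, radius=radius, length=height / 2.,
                             rgbaColor=[0.8, 0.6, 0.4, 1],
                             specularColor=specular_color,
                             visualFramePosition=position_offset,
                             visualFrameOrientation=orientation,
                             physicsClientId=sim.id)  # hypothetical attribute for the client id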

Some problem in the pyrobolearn/kinematics/inverse/moving_sphere.py

I have tried to track the orientation of the target sphere, not just the position.

dv = kp_x * (sphere.position - x) + kd_x * dx

I use this expression for the position control, and it performs well. But when I add orientation tracking using

dw = kp_o * error_quaternion(sphere.orientation, quaternion_endeffector) + kd_o * do_d[0:3]
dq = Jp.dot(np.hstack((dv, dw)))

the tracking performance is not good. The error_quaternion function is:

def error_quaternion(a, b):
    q = a.copy()
    p = b.copy()
    x = q[0] * p[1] - q[1] * p[0] - q[2] * p[3] + q[3] * p[2]
    y = q[0] * p[2] + q[1] * p[3] - q[2] * p[0] - q[3] * p[1]
    z = q[0] * p[3] - q[1] * p[2] + q[2] * p[1] - q[3] * p[0]
    return np.array([x, y, z])

do is error_quaternion_q(quaternion_endeffector, sphere.orientation) / dt, where error_quaternion_q is:

def error_quaternion_q(a, b):
    q = a.copy()
    p = b.copy()
    w = q[0] * p[0] - q[1] * p[1] - q[2] * p[2] - q[3] * p[3]
    x = q[0] * p[1] - q[1] * p[0] - q[2] * p[3] + q[3] * p[2]
    y = q[0] * p[2] + q[1] * p[3] - q[2] * p[0] - q[3] * p[1]
    z = q[0] * p[3] - q[1] * p[2] + q[2] * p[1] - q[3] * p[0]
    return np.array([x, y, z, w])

Error when accessing cached joint acceleration

Calling the get_joint_acceleration() method in robots/robot.py leads to an error when called twice, due to an indexing error for the cached values here. An easy fix for this issue is to replace this line with ddq = self._state['ddq'][0]. Happy to do a PR :)

Execution failure due to Non-ASCII characters (Bug)

When trying to run any example, for example:

pyrobolearn_ws/pyrobolearn/examples/kinematics/forward$ python fk.py
pybullet build time: Sep 25 2019 10:28:00
current_dir .../.virtualenvs/venv_py2/local/lib/python2.7/site-packages/pybullet_envs/bullet
Traceback (most recent call last):
  File "fk.py", line 11, in <module>
    from pyrobolearn.simulators import Bullet
  File "/packages/pyrobolearn/pyrobolearn/__init__.py", line 23, in <module>
    from . import simulators
  File "/packages/pyrobolearn/pyrobolearn/simulators/__init__.py", line 35, in <module>
    from .raisim import Raisim
  File "/packages/pyrobolearn/pyrobolearn/simulators/raisim.py", line 31
SyntaxError: Non-ASCII character '\xe2' in file /home/gim2rng/packages/pyrobolearn/pyrobolearn/simulators/raisim.py on line 32, but no encoding declared; see http://python.org/dev/peps/pep-0263/ for details

I started tracing back the non-readable characters; however, they are numerous. They were possibly introduced by editing the documentation sections in an inappropriate text editor.
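One way to list all offending files and lines at once (GNU grep):

grep -rnP '[^\x00-\x7F]' pyrobolearn/ --include='*.py'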

yaw torque in quadcopter off by 1000x

I've been testing the quadcopter model as a backend for ArduPilot. I do have it flying now, but I had to change the mass of the propellers to be 1000x smaller than the current value in the URDF. The default mass of 0.005 seems reasonable, but I needed to change it to 0.000005 to get reasonable yaw control. There is some gross mismatch between thrust and yaw torque on the propellers.
I suspect it is related to thrust for Gazebo normally being on a -1 to 1 scale, whereas for this model it is on a 0 to 1000 scale.

Force control issue

Hi! Thanks for your contribution~
Recently I have tried to write some force control code using a UR robot in your simulation framework, but I found the detected force is too high to track the reference force. Could you attach some force control code using a robot arm like the KUKA or other robots for reference?

Gravity compensation example not working properly

The example given for gravity compensation (examples/dynamics/control/gravity_compensation.py) is not working properly. The robot should stay at rest, but it's moving continuously. I am attaching a video for reference. Can somebody help with this issue? Thanks.

[video: ur5-gc-improper]

Error in /examples/imitation/trajectory_reproduction_kuka_dmp.py

When I run this file, I first record the trajectory using Ctrl+R; then, when pressing Shift+R, it fails to execute and gives the following error:

TypeError: Expecting an instance of `Action`, np.array, or a list of the previous ones, but got instead: <class 'numpy.ndarray'>

Please help.

Bug of sim.create_body and sim.create_visual_shape

When I try to change the scale of the mesh (ellipsoid.obj), it does not actually work. I think it is a problem with these two functions: when I use the PyBullet functions p.createVisualShape and p.createMultiBody directly, it works.

Error in force control task

When I run Force_control_example.py, the robot initially runs for a few iterations while showing the end-effector forces plot. After some time, the robot stops, a new black window appears, and the terminal shows the following error:

-3.9336459877590596e-05
-1.80323431312297e-05
5.969657570434706e-06
2.6458507806909837e-05
4.270999884258811e-05
5.0823149630967465e-05
5.153522232988297e-05
4.396127642365691e-05
3.070630143922923e-05
1.3182937457178667e-05
-5.113344854462901e-06
-2.2079177359950246e-05
-3.482887550450532e-05
WARNING: QApplication was not created in the main() thread.
Exception in thread manipulator task:
Traceback (most recent call last):
  File "/home/deepakraina/anaconda3/lib/python3.7/threading.py", line 926, in _bootstrap_inner
    self.run()
  File "/home/deepakraina/anaconda3/lib/python3.7/threading.py", line 870, in run
    self._target(*self._args, **self._kwargs)
  File "force_control/Force_control_example.py", line 191, in manipulator_thread
    plt.show()
  File "/home/deepakraina/anaconda3/lib/python3.7/site-packages/matplotlib/pyplot.py", line 269, in show
    return _show(*args, **kw)
  File "/home/deepakraina/anaconda3/lib/python3.7/site-packages/matplotlib/cbook/deprecation.py", line 413, in wrapper
    return func(*args, **kwargs)
  File "/home/deepakraina/anaconda3/lib/python3.7/site-packages/matplotlib/backend_bases.py", line 3302, in show
    cls.mainloop()
  File "/home/deepakraina/anaconda3/lib/python3.7/site-packages/matplotlib/backends/backend_qt5.py", line 1094, in mainloop
    signal.signal(signal.SIGINT, signal.SIG_DFL)
  File "/home/deepakraina/anaconda3/lib/python3.7/signal.py", line 47, in signal
    handler = _signal.signal(_enum_to_int(signalnum), _enum_to_int(handler))
ValueError: signal only works in main thread

Please help with this. Thanks

Detaching two bodies does not work as intended

When detaching two bodies, the corresponding world.detach() method does not work as documented.
In particular, consider the following three cases in the code example at the bottom (from worlds/samples/sports/basketball.py).

  1. world.detach(body1=robot.id, body2=world.ball, link1=robot.end_effectors[0], link2=-1): This results in a KeyError, as the constraint dictionary is not properly addressed here. Easy fix: replace the indexing with self.constraints[(body1, body2)][(link1, link2)].
  2. world.detach(body1=robot, body2=world.ball): This does not throw any error but fails silently. The reason is that only body IDs, and not Body instances, are checked for being in the constraints here. Two fixes: either check whether body1/body2 are instances of Body and get their corresponding IDs, or change the docstring to state that this method only works with IDs.
  3. world.detach(body1=robot.id, body2=world.ball): Just specifying the body IDs and no link IDs works just fine :)

Happy to do a PR fixing this issue.
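A sketch of the fix for case 2 (the helper name is hypothetical): normalize Body instances to their IDs before the constraint lookup, which also covers the indexing fix from case 1.

def _to_id(body):
    """Return the unique body id, whether a Body instance or an int id is given."""
    return body.id if hasattr(body, 'id') else body

# inside world.detach(...):
#     body1, body2 = _to_id(body1), _to_id(body2)
#     constraint = self.constraints[(body1, body2)][(link1, link2)]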

#!/usr/bin/env python
# -*- coding: utf-8 -*-
r"""Provide the basketball world.
"""

import os
import numpy as np

from pyrobolearn.worlds import BasicWorld


__author__ = "Brian Delhaisse"
__copyright__ = "Copyright 2019, PyRoboLearn"
__credits__ = ["Brian Delhaisse"]
__license__ = "GNU GPLv3"
__version__ = "1.0.0"
__maintainer__ = "Brian Delhaisse"
__email__ = "[email protected]"
__status__ = "Development"


# TODO: finish to implement the world, create corresponding environment (in `envs` folder) with state and reward.

class BasketBallWorld(BasicWorld):
    r"""Basketball world

    """

    def __init__(self, simulator, position=(3., 0., 2.), scale=(1., 1., 1.)):
        """
        Initialize the basketball world.

        Args:
            simulator (Simulator): the simulator instance.
            position (tuple/list of 3 float, np.array[3]): position of the basketball hoop.
            scale (tuple/list of 3 float): scale of the basket hoop.
        """
        super(BasketBallWorld, self).__init__(simulator)

        mesh_path = os.path.dirname(os.path.abspath(__file__)) + '/../../meshes/sports/basketball/'
        position = np.asarray(position)

        # load basket hoop position
        self.hoop = self.load_mesh(mesh_path + 'basketball.obj', position=position, scale=scale, mass=0, flags=1)

        # load ball
        # self.ball = self.load_sphere(position=[0., 0., 1.], radius=0.1193, mass=0.625)
        # self.apply_texture(texture=mesh_path + 'Basketball-ColorMap.jpg', body_id=self.ball)
        self.ball = self.load_mesh(mesh_path + 'ball.obj', position=[0., 0., 2.], scale=(1., 1., 1.), mass=0.625,
                                   flags=0)
        self.ball_radius = 0.1193

        # set the restitution coefficient for the ball
        # Ref: "Measure the coefficient of restitution for sports balls", Persson, 2012
        self.change_dynamics(self.ball, restitution=0.87)
        self.change_dynamics(restitution=1.)

    def reset(self, world_state=None):
        super(BasketBallWorld, self).reset(world_state)

    def step(self, sleep_dt=None):
        super(BasketBallWorld, self).step(sleep_dt)


# Test
if __name__ == '__main__':
    from itertools import count
    import pyrobolearn as prl

    # create simulator
    sim = prl.simulators.Bullet()

    # create world
    world = BasketBallWorld(sim)

    # create manipulator
    robot = world.load_robot('kuka_iiwa')

    # attach ball to robot end effector
    world.attach(body1=robot, body2=world.ball, link1=robot.end_effectors[0], link2=-1, joint_axis=[0., 0., 0.],
                 parent_frame_position=[0., 0., world.ball_radius], child_frame_position=[0., 0., 0.],
                 parent_frame_orientation=[0, 0., 0., 1.])

    # run simulation
    for t in count():
        if t % 100 == 0:
            print(f"Timestep {t}")
        if t == 500:
            # Using PyBullet

            # works after fixing (see comment above): specifying both body IDs and link IDs
            # world.detach(body1=robot.id, body2=world.ball, link1=robot.end_effectors[0], link2=-1)

            # does not work: specifying body1 as object and body2 by ID
            # world.detach(body1=robot, body2=world.ball)

            # works: specifying just body IDs
            world.detach(body1=robot.id, body2=world.ball)

        world.step(sim.dt)

Examples Sensors

Hello! I need to use two camera sensors, but there aren't examples of how to add them to the simulation. I suggest creating a camera like a robot, so it can be loaded easily (e.g. world.load_camera), and the same for other sensors like the IMU. Thanks a lot!

Not able to replicate the force control task with UR5 robot

I have seen Force_control_example.py with the Kuka-IIWa robot, which is working perfectly fine. But then I tried to implement the same with the UR5 robot, then it is not able to do the force tracking. Can you please let me know, what I am doing wrong here. Is it an issue with M/D/K parameter tunning?

Code:

#!/usr/bin/env python
# -*- coding: utf-8 -*-
"""
The task is to track the force along z axis (vertical to the table) by employing admittance control, meanwhile tracking
a circle trajectory on the xy plane. And the end-effector's target position is visualized by a sphere.
Reference:
[1] SONG, Peng; YU, Yueqing; ZHANG, Xuping. A tutorial survey and comparison of impedance control on robotic manipulation
. Robotica, 2019, 37.5: 801-836.
"""

import numpy as np
from itertools import count

from pyrobolearn.simulators import Bullet
from pyrobolearn.worlds import BasicWorld
from pyrobolearn.robots import Body, sensors, UR10, UR5
from pyrobolearn.utils.transformation import *

from pyrobolearn.utils.plotting.end_effector_realtime_FT_plot import EeFtRealTimePlot

from threading import Thread

import matplotlib.pyplot as plt
import matplotlib
matplotlib.use('TkAgg')


# Real-time plot the End-effector force and torque
def plotting_thread(plot):
    if not isinstance(plot, EeFtRealTimePlot):
        raise TypeError("Expecting plot to be an instance of EeFtRealTimePlot, not {}".format(type(plot)))
    while True:
        plot.update()

# Manipulate the whole process
# The sphere is used to visualize the reference trajectory, so I create the sphere trajectory as the reference
def manipulator_thread(world, robot, sphere, FT_sensor):
    """
    First step: is to arrive the initial position
    """
    for t in count():
        # move sphere
        # sphere.position = np.array([0.36, 0, 0.8])
        sphere.position = np.array([0.5, 0.1, 0.94])

        # get current end-effector position and velocity in the task/operational space
        x = robot.get_link_world_positions(link_id)
        dx = robot.get_link_world_linear_velocities(link_id)
        o = robot.get_link_world_orientations(link_id)
        do = robot.get_link_world_angular_velocities(link_id)

        # Get joint positions
        q = robot.get_joint_positions()

        # Get linear jacobian
        if robot.has_floating_base():
            J = robot.get_jacobian(link_id, q=q)[:, qIdx + 6]
        else:
            J = robot.get_jacobian(link_id, q=q)[:, qIdx]

        # Pseudo-inverse: \hat{J} = J^T (JJ^T + k^2 I)^{-1}
        Jp = robot.get_damped_least_squares_inverse(J, damping)

        dv = kp * (sphere.position - x) - kd * dx
        dw = kp * quaternion_error(sphere.orientation, o) - kd * do
        # evaluate damped-least-squares IK
        dq = Jp.dot(np.hstack((dv, dw)))

        # set joint positions
        q = q[qIdx] + dq * dt
        robot.set_joint_positions(q, joint_ids=joint_ids)
        if t > 1000:
            break
        # step in simulation
        world.step(sleep_dt=dt)
    """
        Second step: from the initial pose, move vertically downward
        until the end-effector touches the desktop with the desired contact force (Fz_desired)
    """
    for t in count():
        Fz_desired = 5  # the threshold of the contact force with table
        # move sphere
        sphere.position = np.array([0.5, 0.1, 0.94-0.0005*t])

        # get current end-effector position and velocity in the task/operational space
        x = robot.get_link_world_positions(link_id)
        dx = robot.get_link_world_linear_velocities(link_id)
        o = robot.get_link_world_orientations(link_id)
        do = robot.get_link_world_angular_velocities(link_id)

        # Get joint positions
        q = robot.get_joint_positions()

        # Get linear jacobian
        if robot.has_floating_base():
            J = robot.get_jacobian(link_id, q=q)[:, qIdx + 6]
        else:
            J = robot.get_jacobian(link_id, q=q)[:, qIdx]

        # Pseudo-inverse: \hat{J} = J^T (JJ^T + k^2 I)^{-1}
        Jp = robot.get_damped_least_squares_inverse(J, damping)

        dv = kp * (sphere.position - x) - kd * dx
        dw = kp * quaternion_error(sphere.orientation, o) - kd * do
        # evaluate damped-least-squares IK
        dq = Jp.dot(np.hstack((dv, dw)))

        # set joint positions
        # robot.set_joint_velocities(dq, joint_ids=joint_ids)
        q = q[qIdx] + dq * dt
        robot.set_joint_positions(q, joint_ids=joint_ids)

        # sense once per loop, and only index the reading after checking it exists
        force = FT_sensor.sense()
        if force is not None:
            print('force:', force[2])
            if force[2] > Fz_desired:
                print('reached desired F')
                break
        # step in simulation
        world.step(sleep_dt=dt)
    sp_z = []
    num = []
    detx = np.array([0.0, 0.0, 0.0])
    """
    Third step to keep the target force along z axis(vertical to the table), 
    and complete circular motion trajectory on plane xy
    """
    circle_center = np.array([0.5, 0.1]) # the center of the trajectory
    for t in count():
        Fz_desired = 5  # desired force
        # move sphere
        if t == 0:
            z = robot.get_link_world_positions(link_id)[2]
            sphere.position = np.array([circle_center[0] - r * np.sin(w * t + np.pi / 2), circle_center[1] + r * np.cos(w * t + np.pi / 2), z])

        # get current end-effector position and velocity in the task/operational space
        x = robot.get_link_world_positions(link_id)
        dx = robot.get_link_world_linear_velocities(link_id)
        o = robot.get_link_world_orientations(link_id)
        do = robot.get_link_world_angular_velocities(link_id)

        # Get joint positions
        q = robot.get_joint_positions()

        # Get linear jacobian
        if robot.has_floating_base():
            J = robot.get_jacobian(link_id, q=q)[:, qIdx + 6]
        else:
            J = robot.get_jacobian(link_id, q=q)[:, qIdx]

        # Pseudo-inverse: \hat{J} = J^T (JJ^T + k^2 I)^{-1}
        Jp = robot.get_damped_least_squares_inverse(J, damping)
        # Apply the admittance control
        Fz_current = FT_sensor.sense()[2]  # record the current Fz
        print('Fz: ', Fz_current)
        Fz_error = Fz_current - Fz_desired  # record the current error

        # set the M\D\K parameters by heuristic method, these parameters may have a good result
        # M = 1
        # D = 9500
        # K = 500000
        M = 0.2
        D = 4250
        K = 100000
        # Refer the formula (33) in this article [1]
        # the formula is theta_x(k) = Fc(k)*Ts^2+Bd*Ts*theta_x(k-1)+Md*(2*theta_x(k-1)-theta_x(k-2))/(Md+Bd*Ts+Kd*Ts^2)
        numerator = Fz_error * np.square(dt) + D * dt * detx[1] + M * (2 * detx[1] - detx[2])
        denominator = M + D*dt + K*np.square(dt)
        detx_ = numerator / denominator
        print(detx_)
        detx[2] = detx[1]
        detx[1] = detx[0]
        detx[0] = detx_

        zzz = sphere.position[2] + detx[0]

        # circle_center is the centre of the circle trajectory on the table
        sphere.position = np.array([circle_center[0] - r * np.sin(w * t + np.pi / 2), circle_center[1] + r * np.cos(w * t + np.pi / 2), zzz])
        dv = kp * (sphere.position - x) - kd * dx  # compute the other direction tracking error term


        sp_z.append(sphere.position[2])
        num.append(t)

        dw = kp * quaternion_error(sphere.orientation, o) - kd * do
        # evaluate damped-least-squares IK
        dq = Jp.dot(np.hstack((dv, dw)))

        # set joint positions
        q = q[qIdx] + dq * dt
        robot.set_joint_positions(q, joint_ids=joint_ids)

        # print(Fz_error, dv[2])
        if t == 2000:
            break
        # step in simulation
        world.step(sleep_dt=dt)
    plt.plot(num, sp_z)  # plot the position on the z axis
    plt.xlabel("timesteps")
    plt.ylabel("vertical position")
    plt.title("The z axis position during the task")
    plt.show()



if __name__=='__main__':
    # Create simulator
    sim = Bullet()

    # create world
    world = BasicWorld(sim)

    # flag : 0 # PI control
    flag = 0
    # create robot
    robot = UR5(sim, position=[0,0,0.5])
    # robot = 'ur5'
    robot.print_info()
    world.load_robot(robot)
    world.load_table(position=np.array([1, 0., 0.]), orientation=np.array([0.0, 0.0, 0.0, 1.0]))
    # define useful variables for IK
    dt = 1. / 240
    link_id = robot.get_end_effector_ids(end_effector=0)
    print('link_id:',link_id)
    joint_ids = robot.joints  # actuated joint
    damping = 0.01  # for damped-least-squares IK
    wrt_link_id = -1  # robot.get_link_ids('iiwa_link_1')
    qIdx = robot.get_q_indices(joint_ids)
    print('qidx:', qIdx)

    # define gains
    kp = 500  # 5 if velocity control, 50 if position control
    kd = 5  # 2*np.sqrt(kp)

    # create sphere to follow
    sphere = world.load_visual_sphere(position=np.array([0.5, 0., 0.5]), radius=0.05, color=(1, 0, 0, 0.5))
    sphere = Body(sim, body_id=sphere)

    # set initial joint p
    # Positions (based on the position of the sphere at [0.5, 0, 1])
    robot.reset_joint_states(q=[0.0, -1.57, 1.57, -1.57, -1.57, -1.57])

    # define amplitude and angular velocity when moving the sphere
    w = 0.01
    r = 0.1

    # I set the reference orientation to a constant
    sphere.orientation = np.array([-0.49940256, 0.5001992, 0.50019888, 0.50019888])

    FT_sensor = sensors.JointForceTorqueSensor(sim, body_id=robot.id, joint_ids=5)
    # The plotting handle
    plot = EeFtRealTimePlot(robot, sensor=FT_sensor, forcex=True, forcey=True, forcez=True,
                            torquex=True, torquey=True, torquez=True, num_point=1000, ticks=24)
    # FT_ = np.zeros(6)

    plot_t = Thread(target=plotting_thread, args=[plot], name='plotting task')
    manipulator_t = Thread(target=manipulator_thread, args=(world, robot, sphere, FT_sensor), name='manipulator task')

    thread_pools = [plot_t, manipulator_t]
    for thread in thread_pools:
        thread.start()

    for thread in thread_pools:
        thread.join()

Dockerfile simplify (suggestion)

Hi,
Thanks for the great work. I tried to use it by building the image from the Dockerfile. However, I found the Dockerfile includes operations that download a lot of big files (such as CUDA, cuDNN tools, Gazebo models), which costs a lot of time and is prone to errors because of a bad internet connection or other problems (such cases happened to me). I think a more reasonable way might be to download the necessary big files in advance and use the COPY command to place them in the right place. This way, even if the building process fails, you don't have to download the files again.
I am not very familiar with Docker, so if I am wrong, please disregard the above concerns.
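A sketch of the suggested pattern (all file names are placeholders): fetch the big archives once next to the Dockerfile, then COPY them in, so a failed build does not re-download them.

# files downloaded in advance, next to the Dockerfile:
#   cuda.deb  gazebo_models.tar.gz
COPY cuda.deb /tmp/cuda.deb
RUN dpkg -i /tmp/cuda.deb && rm /tmp/cuda.deb
COPY gazebo_models.tar.gz /root/.gazebo/models.tar.gz
RUN tar -xzf /root/.gazebo/models.tar.gz -C /root/.gazebo/ && rm /root/.gazebo/models.tar.gz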

Python version when installing Mujoco

As the installation instructions suggest having either Python 2.7 or 3.5, the MuJoCo installation fails, since MuJoCo 2.0 requires Python >= 3.6. Therefore, we need to mention this when presenting MuJoCo as an alternative simulator. Another solution would be to force the pip install to use version 3.6.

need doc on frame conventions

The docs really need a section on frame conventions. Here is what I've discovered:

  • robot.position is NWU in meters
  • robot.orientation is [q1, q2, q3, w]
  • robot.linear_velocity is NWU earth frame, in m/s
  • Euler angles have roll +ve right, pitch +ve down, yaw +ve counter-clockwise from above (in radians)
  • robot.angular_velocity is in the earth frame (this one is really non-obvious!). It needs to be rotated, by multiplying by a rotation matrix for the orientation, to get to the body frame for a gyro equivalent. angular_velocity is +ve right for roll, +ve down for pitch, +ve counter-clockwise (from above) for yaw
  • acceleration seems to not be available (it always gives zero), so it needs to be calculated by differentiating velocity in the earth frame, adding gravity, and rotating to the body frame
  • when world axes are shown, red is north, green is west, blue is up

I think it would be really nice to have to_earth_frame and to_body_frame helper functions
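A minimal sketch of such helpers (hypothetical names; assumes the [q1, q2, q3, w] quaternion convention listed above):

import numpy as np

def quat_to_rot(q):
    """Rotation matrix (body -> earth) from a quaternion [x, y, z, w]."""
    x, y, z, w = q
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - z*w),     2*(x*z + y*w)],
        [2*(x*y + z*w),     1 - 2*(x*x + z*z), 2*(y*z - x*w)],
        [2*(x*z - y*w),     2*(y*z + x*w),     1 - 2*(x*x + y*y)]])

def to_body_frame(vec_earth, orientation):
    """Rotate an earth-frame vector (e.g. robot.angular_velocity) into the body frame."""
    return quat_to_rot(orientation).T.dot(vec_earth)

def to_earth_frame(vec_body, orientation):
    return quat_to_rot(orientation).dot(vec_body)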

The collision shape problem of concave object

I have the URDF model of a hole from SolidWorks, and also a mesh file of the hole. But when I try to assemble the peg into the hole, I found the collision shape of the hole is a solid cylinder that does not contain the hole of the cylinder.
Could you tell me how to configure this STL model, or which software to use to configure this property?

This is the model STL:
hole_f.STL.tar.gz
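A likely cause, with a hedged workaround: PyBullet approximates concave collision meshes of dynamic bodies by their convex hull, which fills the cavity. Recent pybullet versions bundle V-HACD convex decomposition, which splits the mesh into convex pieces that preserve the hole:

import pybullet as p

p.connect(p.DIRECT)  # no GUI needed for the decomposition
# decompose the concave mesh into convex parts (file names are placeholders)
p.vhacd("hole.obj", "hole_vhacd.obj", "vhacd_log.txt")
# then reference hole_vhacd.obj as the collision mesh in the URDF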

Something Error when try to connect with ros

When I try to run the file pyrobolearn/examples/middlewares/bullet_ros_publisher.py, some errors occur:

File "/pyrobolearn/pyrobolearn/utils/parsers/robots/urdf_parser.py", line 39, in __init__
    super().__init__(filename)
TypeError: super() takes at least 1 argument (0 given)

so I changed it to super(URDFParser, self).__init__(filename).

Another error occurs when I try to load the kuka_iiwa model in bullet_ros_publisher.py, in the file /pyrobolearn/pyrobolearn/simulators/middlewares/robots/iiwa14.py:

if self.is_publishing:
    q = self.subscriber.get_joint_positions()
    dq = self.subscriber.get_joint_velocities()
    tau = self.subscriber.get_joint_torques()
    if q is None:  # +++++
        return 0   # +++++
    if len(q) > 0:
        q_indices = None if joint_ids is None else self.q_indices[joint_ids]
        if q_indices is not None:
            q[q_indices] = positions
            if velocities is not None:
                dq[q_indices] = velocities

The lines marked '+++++' were added to handle the case where q is None (NoneType has no len()).

Recent versions of Torch (>1.1.0) produces errors in GP model training

Our requirements file indicates that the torch library can be any version >= 1.1.0; however, it seems there were some relevant changes in recent versions (> 1.1.0) that produce the following error when using GPR models (for example, when running gpr.py in the examples folder):

Traceback (most recent call last):
  File "/home/rol2rng/Code/IIT/pyrobolearn_fork/examples/models/gpr.py", line 41, in <module>
    f = model.sample(x, num_samples=10, to_numpy=True)
  File "/home/rol2rng/Code/IIT/pyrobolearn_fork/pyrobolearn/models/gp/gp.py", line 665, in sample
    return self._convert_to_numpy(f.sample(num_samples))
  File "/home/rol2rng/Code/IIT/pyrobolearn_official/venv35/lib/python3.5/site-packages/gpytorch/distributions/multivariate_normal.py", line 167, in sample
    return self.rsample(sample_shape=sample_shape, base_samples=base_samples)
  File "/home/rol2rng/Code/IIT/pyrobolearn_official/venv35/lib/python3.5/site-packages/gpytorch/distributions/multivariate_normal.py", line 136, in rsample
    res = covar.zero_mean_mvn_samples(num_samples) + self.loc.unsqueeze(0)
  File "/home/rol2rng/Code/IIT/pyrobolearn_official/venv35/lib/python3.5/site-packages/gpytorch/lazy/lazy_tensor.py", line 1112, in zero_mean_mvn_samples
    covar_root = self.root_decomposition()
  File "/home/rol2rng/Code/IIT/pyrobolearn_official/venv35/lib/python3.5/site-packages/gpytorch/lazy/lazy_tensor.py", line 901, in root_decomposition
    )(*self.representation())
  File "/home/rol2rng/Code/IIT/pyrobolearn_official/venv35/lib/python3.5/site-packages/gpytorch/functions/_root_decomposition.py", line 64, in forward
    eigenvalues, eigenvectors = lanczos_tridiag_to_diag(t_mat)
  File "/home/rol2rng/Code/IIT/pyrobolearn_official/venv35/lib/python3.5/site-packages/gpytorch/utils/lanczos.py", line 166, in lanczos_tridiag_to_diag
    return batch_symeig(t_mat)
  File "/home/rol2rng/Code/IIT/pyrobolearn_official/venv35/lib/python3.5/site-packages/gpytorch/utils/eig.py", line 28, in batch_symeig
    eigenvalues[i] = evals.masked_fill_(1 - mask, 1)
  File "/home/rol2rng/Code/IIT/pyrobolearn_official/venv35/lib/python3.5/site-packages/torch/tensor.py", line 362, in __rsub__
    return _C._VariableFunctions.rsub(self, other)
RuntimeError: Subtraction, the `-` operator, with a bool tensor is not supported. If you are trying to invert a mask, use the `~` or `logical_not()` operator instead.

This does not happen when installing specifically torch 1.1.0. However, note that the latest torch version requires Python >= 3.6.

So, short-term solution: fix the torch version in the requirements to == 1.1.0. Long-term solution: identify the required change in the sampling function of the GP according to the latest torch version.
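For the long-term solution, the failing line in gpytorch's eig.py suggests the required change: in torch > 1.1.0, a boolean mask must be inverted with `~` instead of `1 - mask`. A minimal reproduction and fix:

import torch

mask = torch.tensor([True, False, True])
evals = torch.tensor([0.5, -1.0, 2.0])

# evals.masked_fill_(1 - mask, 1)      # RuntimeError on torch > 1.1.0
filled = evals.masked_fill_(~mask, 1)  # fill entries where the mask is False
print(filled)  # tensor([0.5000, 1.0000, 2.0000])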

Incompatible MuJoCo installations

As discussed with @lrozo I document the following issue here:
I have another MuJoCo installation on my system, which I cannot touch for different reasons. I did not install MuJoCo via the provided pyrobolearn installation script.
In that case, pyrobolearn throws an error as follows when trying to run any example.

Traceback (most recent call last):
  File "examples/imitation/trajectory_reproduction_kuka_dmp.py", line 12, in <module>
    from pyrobolearn.simulators import Bullet
  File "/home/gim2rng/pyrobolearn_ws/pyrobolearn/pyrobolearn/__init__.py", line 24, in <module>
    from . import simulators
  File "/home/gim2rng/pyrobolearn_ws/pyrobolearn/pyrobolearn/simulators/__init__.py", line 31, in <module>
    import mujoco_py
  File "/home/gim2rng/.virtualenvs/venv_py3/lib/python3.6/site-packages/mujoco_py/__init__.py", line 2, in <module>
    init_config()
  File "/home/gim2rng/.virtualenvs/venv_py3/lib/python3.6/site-packages/mujoco_py/config.py", line 37, in init_config
    raise error.MujocoDependencyError('Found your MuJoCo license key but not binaries. Please put your binaries into ~/.mujoco/mjpro131 or set MUJOCO_PY_MJPRO_PATH. Follow the instructions on https://github.com/openai/mujoco-py for setup.')
mujoco_py.error.MujocoDependencyError: Found your MuJoCo license key but not binaries. Please put your binaries into ~/.mujoco/mjpro131 or set MUJOCO_PY_MJPRO_PATH. Follow the instructions on https://github.com/openai/mujoco-py for setup.

Continuous Integration?

Before you go online with PyRoboLearn at CoRL, you might want to consider adding a slim CI pipeline, e.g. using Travis, which is free for open-source projects.

In that context, you may also consider switching to pull-request-driven development, since pushing to master is not ideal and will certainly scare potential users away. See e.g. the following discussion.
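A minimal .travis.yml sketch (install steps taken from the README above; an untested assumption):

language: python
python:
  - "2.7"
  - "3.5"
install:
  - pip install numpy cython
  - pip install -e .
script:
  - python -c "import pyrobolearn"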

Inverse the coding and the shebang line

Currently, in every Python file, the following is written at the beginning:

# -*- coding: utf-8 -*-
#!/usr/bin/env python

This is incorrect, and should be replaced by:

#!/usr/bin/env python
# -*- coding: utf-8 -*-

The shebang (#! line) must always be the first, because the kernel looks at the first two bytes of an executable, and if they happen to be #!, then the kernel interprets the rest of the line as the binary that actually needs to be run. See the following link for more information: https://www.python.org/dev/peps/pep-0263/

Solution:

find . -type f \( -name "*.py" ! -name "__init__.py" \) -exec sed -i '/# -\*- coding: utf-8 -\*-/d' {} +
find . -type f \( -name "*.py" ! -name "__init__.py" \) -exec sed -i '2 i\# -*- coding: utf-8 -*-' {} +

The first bash line will delete the encoding in every python file (except the __init__.py files), then the second line will put the encoding at the second line of every python file (except the __init__.py files).

Giant repo, frequent download fail

I tried git clone and downloading the zip multiple times and failed (network error).
Upon succeeding in downloading a master zip, I realized the repo is almost 1 GB in size.

Should it be offloaded to Releases? Or split into many sub-repos?

The problem of subscribing the topic which is published through ros to realize the force control

I hope to get the force and torque information received from ATI sensors in the simulation environment, but this is hard to realize in your package. I have found a package which can measure the F/T information, but the topic is published in Gazebo.
So I want to know how I could keep the robot model in your environment in the same state as the robot in Gazebo, while passing the force/torque measurements from Gazebo into your environment to realize the force control.

Error while running pendulum control environment in pyrobolearn

I am getting the following error when I run the pendulum.py file:

ImportError: cannot import name 'robot_names_to_classes' from partially initialized module 'pyrobolearn.robots' (most likely due to a circular import) (/home/$USER/$DIRECTORY/pyrobolearn-master/pyrobolearn/robots/__init__.py)

Cloning via https

If the user is behind a proxy/firewall, this line may throw an error:

Cloning into 'dart'... fatal: unable to connect to github.com: github.com: Name or service not known

Better try cloning via https whenever possible. This also applies to other installation scripts using the same git-clone as this.

singular matrix on training GMM model

Recently, I have often encountered a singular matrix when training a GMM using your provided GMM class. However, the same data can be trained successfully. Would you have any time to check it out?

File "../models/gaussian.py", line 421, in pdf
return mvn.pdf(x, self.mean, self.cov, allow_singular=False) # default
File "/home/user/anaconda3/envs/py37/lib/python3.7/site-packages/scipy/stats/_multivariate.py", line 516, in pdf
psd = _PSD(cov, allow_singular=allow_singular)
File "/home/user/anaconda3/envs/py37/lib/python3.7/site-packages/scipy/stats/_multivariate.py", line 165, in init
raise np.linalg.LinAlgError('singular matrix')
numpy.linalg.LinAlgError: singular matrix
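A common workaround (a sketch, not an official fix): regularize the covariance with a small diagonal term before evaluating the pdf, so it stays positive definite even when a component collapses onto few data points.

import numpy as np
from scipy.stats import multivariate_normal as mvn

def safe_pdf(x, mean, cov, reg=1e-8):
    cov = cov + reg * np.eye(cov.shape[0])  # nudge eigenvalues away from zero
    return mvn.pdf(x, mean, cov)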

install pyrobolearn error

When I run $ pip install -e .

ERROR: Command errored out with exit status 1:
 command: /home/wq/env/bin/python -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/home/wq/pyrobolearn/setup.py'"'"'; __file__='"'"'/home/wq/pyrobolearn/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' egg_info --egg-base /tmp/pip-pip-egg-info-31a3tcbj
     cwd: /home/wq/pyrobolearn/
Complete output (7 lines):
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "/home/wq/pyrobolearn/setup.py", line 21, in <module>
    reqs = [str(ir.req) for ir in install_requires]
  File "/home/wq/pyrobolearn/setup.py", line 21, in <listcomp>
    reqs = [str(ir.req) for ir in install_requires]
AttributeError: 'ParsedRequirement' object has no attribute 'req'
----------------------------------------
ERROR: Command errored out with exit status 1: python setup.py egg_info Check the logs for full command output.

The same error occurs in Python 3.5 and 3.6 environments.
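The cause is a pip API change: pip >= 20 renamed the `req` attribute of parsed requirements to `requirement`. A hedged sketch of a compatible line for setup.py (internal pip API, so not guaranteed stable):

from pip._internal.req import parse_requirements

install_requires = parse_requirements('requirements.txt', session='setup')
# old pip exposes `ir.req` (an object); pip >= 20 exposes `ir.requirement` (a string)
reqs = [str(getattr(ir, 'req', None) or ir.requirement) for ir in install_requires]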

IK with gripper

Hello! Is there some way to perform inverse kinematics considering a gripper or another tool attached to the end link of the robot using pyrobolearn? I can only do IK for the end link of the robot, but I'm using a gripper, so I would like to send a position referring to the gripper. Thanks!
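A common workaround, as a sketch (not a pyrobolearn feature): express the fixed flange-to-gripper offset in the end-link frame and shift the IK target by it, so solving for the end link places the gripper at the desired point.

import numpy as np

def flange_target(tcp_target, R_end_link, tcp_offset=np.array([0., 0., 0.15])):
    """Convert a desired gripper (TCP) position into an IK target for the end link.

    R_end_link: 3x3 rotation of the end link; tcp_offset: gripper length (assumed value).
    """
    return tcp_target - R_end_link.dot(tcp_offset)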

trajectory visualization

How can we implement the trajectory visualization of the robot end-effector as shown in your paper?
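One way to do it, as a sketch using PyBullet's debug-line API directly (not a pyrobolearn feature): draw a segment between consecutive end-effector positions at every step.

import pybullet as p

prev_pos = None

def draw_ee_trajectory(pos, color=(1, 0, 0)):
    """Append one segment of the end-effector trace; call once per simulation step."""
    global prev_pos
    if prev_pos is not None:
        p.addUserDebugLine(prev_pos, list(pos), lineColorRGB=color, lineWidth=2, lifeTime=0)
    prev_pos = list(pos)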

libnlopt.so.0: cannot open shared object file: No such file or directory

Hi! When I was preparing the environment for the package, I followed the instructions in the README, and everything was OK, no errors! But when I ran pyrobolearn/examples/kinematics/forward, it produced the following error:

Some ROS packages were not found... Skipping prl.simulators.middlewares.ROS...
pybullet build time: Nov 17 2019 13:30:40
current_dir=/home/tiboy/py3.5/lib/python3.5/site-packages/pybullet_envs/bullet
Dart could not be found on this system... Skipping prl.simulators.Dart...
MuJoCo could not be found on this system... Skipping prl.simulators.Mujoco...
Raisim could not be found on this system... Skipping prl.simulators.Raisim...
RBDL could not be found on this system... Skipping priorities.RBDLModelInterface...
Traceback (most recent call last):
  File "/home/tiboy/simulation/pyrobolearn/pyrobolearn/optimizers/nlopt_optimizer.py", line 15, in <module>
    import nlopt
  File "/home/tiboy/py3.5/lib/python3.5/site-packages/nlopt.py", line 35, in <module>
    _nlopt = swig_import_helper()
  File "/home/tiboy/py3.5/lib/python3.5/site-packages/nlopt.py", line 31, in swig_import_helper
    _mod = imp.load_module('_nlopt', fp, pathname, description)
  File "/home/tiboy/py3.5/lib/python3.5/imp.py", line 242, in load_module
    return load_dynamic(name, filename, file)
  File "/home/tiboy/py3.5/lib/python3.5/imp.py", line 342, in load_dynamic
    return _load(spec)
ImportError: libnlopt.so.0: cannot open shared object file: No such file or directory

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/tiboy/simulation/pyrobolearn/examples/kinematics/inverse/ik.py", line 14, in <module>
    from pyrobolearn.simulators import Bullet
  File "/home/tiboy/simulation/pyrobolearn/pyrobolearn/__init__.py", line 98, in <module>
    from . import priorities
  File "/home/tiboy/simulation/pyrobolearn/pyrobolearn/priorities/__init__.py", line 13, in <module>
    from . import solvers
  File "/home/tiboy/simulation/pyrobolearn/pyrobolearn/priorities/solvers/__init__.py", line 10, in <module>
    from .nlp_task_solver import NLPTaskSolver
  File "/home/tiboy/simulation/pyrobolearn/pyrobolearn/priorities/solvers/nlp_task_solver.py", line 15, in <module>
    from pyrobolearn.optimizers.nlopt_optimizer import NLopt
  File "/home/tiboy/simulation/pyrobolearn/pyrobolearn/optimizers/nlopt_optimizer.py", line 17, in <module>
    raise ImportError(e.__str__() + "\n HINT: you can install nlopt via `pip install nlopt`.")
ImportError: libnlopt.so.0: cannot open shared object file: No such file or directory
 HINT: you can install nlopt via `pip install nlopt`.

But I have installed nlopt via pip; its version is 2.4.2.post2. Can you help me fix this problem?
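A likely fix (assumption: the pip-installed nlopt 2.4.2 bindings link against a system NLopt library that is missing): install the shared library through apt, or reinstall a newer wheel that bundles it.

sudo apt install libnlopt0          # provides libnlopt.so.0 on Ubuntu 16.04/18.04
# or: pip install --upgrade nlopt   # newer wheels ship the library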
