
sensenet's Introduction

SenseNet

SenseNet is a sensorimotor and touch simulator to teach AIs how to interact with their environments via sensorimotor systems and touch neurons. SenseNet is meant as a research framework for machine learning researchers and theoretical computational neuroscientists.

(figure: gestures demo)

Reinforcement learning

SenseNet can be used in reinforcement learning environments. The original code used OpenAI's Gym as the base, so any code written for Gym can be used with little to no tweaking. Often you can just replace gym with sensenet and everything will work.
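A minimal sketch of the drop-in usage, assuming sensenet exposes a gym-style make() entry point and that TouchWandEnv-v0 is a registered environment id (if it does not, constructing the environment class directly, e.g. SenseEnv, plays the same role):

import sensenet

env = sensenet.make("TouchWandEnv-v0")      # assumption: gym-style factory
observation = env.reset()
for _ in range(100):
    action = env.action_space.sample()      # random placeholder policy
    observation, reward, done, info = env.step(action)
    if done:
        observation = env.reset()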

Supported Systems

We currently support Mac OS X and Linux (Ubuntu 14.04). Windows mostly works, but we don't have a Windows developer. We also have Docker and Vagrant/VirtualBox images so you can run SenseNet on any platform that supports them.

Install from source

Clone the repository, install the Python dependencies, then install the package:

git clone http://github.com/jtoy/sensenet
cd sensenet
pip install -r requirements.txt
pip install -e '.[all]'

Install the fast way:

pip install sensenet

Train a basic RL agent to learn to touch a missile with a "6th sense":

python examples/agents/reinforce.py -e TouchWandEnv-v0

Dataset

I have made and collected thousands of different objects to manipulate in the simulator. You can use the SenseNet dataset or your own dataset.

(figure: dataset preview)

Testing

We use pytest to run tests. To run the tests, type "cd tests && pytest" from the root directory.

running benchmarks

Included with SenseNet are several examples for competing on the "blind object classification" benchmark. There is a PyTorch example and a TensorFlow example. To run them:

cd agents && python reinforce.py

To see the graphs, run tensorboard --logdir runs and open http://localhost:6006/ (TensorBoard's default port) in your browser.

To publish a release to PyPI: python setup.py register sdist upload

sensenet's People

Contributors

guillaume-chevalier, jtoy, philtabor


sensenet's Issues

logo

Need to get a logo designed soon, along with some concept art.
Probably an image with some combination of touch/senses and brains/artificial intelligence.

TouchWandEnv - wand sticks on object

While the wand is in contact with the object the movement slows down, giving the appearance that the wand is stuck on the object. This is noticeable during the rendering process.

Wand should not show an appreciable slowdown while in contact with the object.

Related to the use of the computeProjectionMatrixFOV and getCameraImage functions. These introduce overhead that increases the time between successive calls to _step, giving the appearance of a slowdown.
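One way to mitigate this, sketched here under the assumption that the environment re-renders on every step: build the projection matrix once and only call getCameraImage every Nth step (the RENDER_EVERY and maybe_render names are illustrative, not SenseNet's actual code):

import pybullet as p

RENDER_EVERY = 5
proj_matrix = p.computeProjectionMatrixFOV(fov=60, aspect=1.0, nearVal=0.01, farVal=10)

def maybe_render(step_count, view_matrix, last_image):
    if step_count % RENDER_EVERY != 0:
        return last_image                    # reuse the previous frame
    _, _, rgb, depth, _ = p.getCameraImage(
        84, 84, viewMatrix=view_matrix, projectionMatrix=proj_matrix,
        renderer=p.ER_TINY_RENDERER)         # software renderer
    return rgb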

test suite

Add some basic tests (see the sketch after this list):

can load environment
can create new environment
hand registers touch
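A minimal sketch of what such tests could look like, assuming the environment is constructed as SenseEnv(...) with reset()/step() methods as the tracebacks elsewhere in this issue list suggest (the import path and constructor arguments are assumptions):

from sensenet.env import SenseEnv    # assumption: actual import path may differ

def test_can_create_environment():
    env = SenseEnv({"render": False})
    assert env is not None

def test_can_reset_environment():
    env = SenseEnv({"render": False})
    observation = env.reset()
    assert observation is not None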

pyramid shape issues

Need to debug pyramids; it seems like we only see/touch certain ones depending on the angle of the pyramid, no idea why.

confirm windows support

All of our dependencies work on Windows, so this code should work on Windows, but we want to officially confirm it. We need someone with a Windows machine to run "cd tests && pytest" and confirm the tests pass. Ideally we would have a test that runs on every commit to make sure Windows keeps working. Unfortunately Travis, our open source CI service, doesn't support Windows; can we use another automated test system to verify Windows support?

fix and validate reinforce.py works in gpu mode

I think the new RNN functionality kills it (a sketch of a possible fix follows the traceback):
https://discuss.pytorch.org/t/training-rnn-on-gpu/3574/5

base_linkepisode: 48

1 touches in current episode <<
env.class_label: 9
Traceback (most recent call last):
File "reinforce.py", line 243, in
output = cnn_lstm(observed_touches) # Prediction
File "/home/jtoy/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py", line 206, in call
result = self.forward(*input, **kwargs)
File "reinforce.py", line 146, in forward
rnn_out, rnn_hidden = self.rnn(inp.view(1, 1, -1), rnn_hidden)
File "/home/jtoy/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py", line 206, in call
result = self.forward(*input, **kwargs)
File "/home/jtoy/anaconda3/lib/python3.6/site-packages/torch/nn/modules/rnn.py", line 91, in forward
output, hidden = func(input, self.all_weights, hx)
File "/home/jtoy/anaconda3/lib/python3.6/site-packages/torch/nn/_functions/rnn.py", line 343, in forward
return func(input, *fargs, **fkwargs)
File "/home/jtoy/anaconda3/lib/python3.6/site-packages/torch/autograd/function.py", line 202, in _do_forward
flat_output = super(NestedIOFunction, self)._do_forward(*flat_input)
File "/home/jtoy/anaconda3/lib/python3.6/site-packages/torch/autograd/function.py", line 224, in forward
result = self.forward_extended(*nested_tensors)
File "/home/jtoy/anaconda3/lib/python3.6/site-packages/torch/nn/_functions/rnn.py", line 285, in forward_extended
cudnn.rnn.forward(self, input, hx, weight, output, hy)
File "/home/jtoy/anaconda3/lib/python3.6/site-packages/torch/backends/cudnn/rnn.py", line 239, in forward
fn.hx_desc = cudnn.descriptor(hx)
File "/home/jtoy/anaconda3/lib/python3.6/site-packages/torch/backends/cudnn/init.py", line 304, in descriptor
descriptor.set(tensor)
File "/home/jtoy/anaconda3/lib/python3.6/site-packages/torch/backends/cudnn/init.py", line 110, in set
self, _typemap[tensor.type()], tensor.dim(),
KeyError: 'torch.FloatTensor'
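The KeyError suggests the RNN hidden state is still a CPU torch.FloatTensor while cuDNN expects CUDA tensors. A self-contained sketch of the likely fix, keeping the model, inputs, and hidden state on the same device (the sizes are illustrative; reinforce.py's real dimensions differ):

import torch
import torch.nn as nn

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
rnn = nn.LSTM(input_size=64, hidden_size=32).to(device)
inp = torch.randn(1, 1, 64, device=device)           # one touch observation
hidden = (torch.zeros(1, 1, 32, device=device),      # h0 on the same device
          torch.zeros(1, 1, 32, device=device))      # c0 on the same device
out, hidden = rnn(inp, hidden)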

better way to define actions

Right now each action is defined as an integer and then processed through a lookup table / case / if statements. If the order of actions changes, the code breaks; we need a more robust system for defining actions.
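A sketch of one more robust scheme: name each action explicitly and dispatch through a dict instead of relying on integer ordering (the action names and the env.translate handler are invented for illustration):

from enum import Enum

class Action(Enum):
    MOVE_X_POS = "move_x_pos"
    MOVE_X_NEG = "move_x_neg"
    MOVE_Y_POS = "move_y_pos"

HANDLERS = {
    Action.MOVE_X_POS: lambda env: env.translate(+1, 0, 0),   # hypothetical env method
    Action.MOVE_X_NEG: lambda env: env.translate(-1, 0, 0),
    Action.MOVE_Y_POS: lambda env: env.translate(0, +1, 0),
}

def apply_action(env, action):
    HANDLERS[action](env)    # no dependence on the integer order of actions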

better error if path to dataset not found

Better error if the path to the dataset is not found; this is the current error (a sketch of a fix follows the log):

python reinforce.py --data_path=../concave_objects --render --obj_type=obj
pybullet build time: Nov 30 2017 10:16:07
Vendor: NVIDIA Corporation
Renderer: NVIDIA GeForce GT 750M OpenGL Engine
Version: 4.1 NVIDIA-10.4.2 310.41.35f01
GLSL: 4.10
b3Printf: Selected demo: Physics Server
startThreads creating 1 threads.
starting thread 0
started thread 0
MotionThreadFunc thread started
Traceback (most recent call last):
File "reinforce.py", line 192, in
env = SenseEnv(vars(args))
File "../env.py", line 47, in init
self.load_object()
File "../env.py", line 78, in load_object
stlfile = files[random.randrange(0,files.len())]
File "/Users/jtoy/miniconda3/lib/python3.6/random.py", line 198, in randrange
raise ValueError("empty range for randrange() (%d,%d, %d)" % (istart, istop, width))
ValueError: empty range for randrange() (0,0, 0)
numActiveThreads = 0
stopping threads
stopThreads: Thread 0 used: 1
Thread with taskId 0 exiting
Thread TERMINATED
destroy semaphore
semaphore destroyed
destroy main semaphore
main semaphore destroyed
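A sketch of a friendlier failure for load_object: check the file list before indexing into it instead of letting randrange blow up (the data_path/obj_type names follow the command-line flags above but may not match the real implementation exactly):

import os
import random

def pick_object_file(data_path, obj_type):
    # would replace the part of SenseEnv.load_object that selects a file
    pattern = "." + obj_type                  # e.g. ".obj" or ".stl"
    files = [f for f in os.listdir(data_path) if f.endswith(pattern)]
    if not files:
        raise FileNotFoundError(
            "no %s files found in dataset path %r; check --data_path"
            % (pattern, data_path))
    return os.path.join(data_path, random.choice(files))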

touch / camera sensor needs to be more accurate

It works at certain angles, but not all angles; need to test and verify this. This is how the code should work:

You will need to compute the view matrix based on the index finger world transform matrix. You can get this using the pybullet.getLinkState API. Then extract the position and orientation from that, and build the view matrix using pybullet.computeViewMatrix. It is a bit complex; the racecar example (https://github.com/bulletphysics/bullet3/blob/449c8afc118a7f3629bc940c304743a084dcfac6/examples/pybullet/gym/pybullet_envs/bullet/racecarZEDGymEnv.py#L95) computes the camera based on the chassis. The finger world transform 3x3 matrix is basically the fwd, left, up vectors.
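A sketch of that computation, assuming the finger link index and camera offset below (they would need to be adjusted for the real hand model):

import numpy as np
import pybullet as p

def finger_view_matrix(body_id, finger_link_index, forward_dist=0.1):
    pos, orn = p.getLinkState(body_id, finger_link_index)[:2]   # world pose of the finger link
    rot = np.array(p.getMatrixFromQuaternion(orn)).reshape(3, 3)
    fwd, up = rot[:, 0], rot[:, 2]            # columns are the link's fwd/left/up axes
    eye = np.array(pos)
    target = eye + forward_dist * fwd         # look a short distance ahead of the fingertip
    return p.computeViewMatrix(eye.tolist(), target.tolist(), up.tolist())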

documentation

We need to fully document:
environment API
agent API
benchmark API
dataset

use constraints instead of resetBasePositionAndOrientation

We can't use resetBasePositionAndOrientation because it breaks the physics simulation.
Don't use "resetBasePositionAndOrientation" while simulating; it should only be used to 'reset' the simulation at the start.

Instead, create a 'fixed' constraint, and set its transform. See for example vrhand.py ( https://github.com/bulletphysics/bullet3/blob/master/examples/pybullet/examples/vrhand.py )

hand_cid = p.createConstraint(hand,-1,-1,-1,p.JOINT_FIXED,[0,0,0],[0.1,0,0],[0.500000,0.300006,0.700000],ho)

Then instead of using resetBasePositionAndOrientation, you use

p.changeConstraint(hand_cid,e[POSITION],e[ORIENTATION], maxForce=50)

(pick your max force as suitable)
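A compact sketch of the whole pattern described above (the URDF path, offsets, and target pose are illustrative only):

import pybullet as p

p.connect(p.DIRECT)
hand = p.loadURDF("hand.urdf")               # assumption: path to the hand model
ho = p.getQuaternionFromEuler([3.14, -3.14 / 2, 0])
hand_cid = p.createConstraint(hand, -1, -1, -1, p.JOINT_FIXED,
                              [0, 0, 0], [0.1, 0, 0], [0.5, 0.3, 0.7], ho)

def move_hand(position, orientation):
    # drive the constraint toward the target pose instead of teleporting the body
    p.changeConstraint(hand_cid, position, orientation, maxForce=50)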

TouchWandEnv - number of moves to touch object

In the TouchWandEnv it takes the wand approximately 30,000 moves to touch the object.

In the HandEnv it takes the hand only hundreds of moves to touch the object.

I suspect this is related to the interplay between the maxForce (in the changeConstraint), the mass of the agent, and the self.move parameter.

The number of actions needed to touch the object should be standardized across environments to enable the use of a universal max_steps parameter.
