Comments (10)
@shivanikishnani @tunglm2203 What versions of python, pytorch, mujoco, and multiworld are you using? I have a slight suspicion that they may be related to the difference in performance. I'm also unsure what else could be the cause.
I'm using the following:
- multiworld: f711cdb (git hash)
- python: 3.5.2
- torch: 0.4.1.post2
- mujoco_py: 1.50.1.59
- gym: 0.10.5
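For reference, the versions above could be pinned in a fresh Python 3.5 virtualenv roughly like this (a sketch: it assumes these wheels are available for your platform, and that multiworld should be installed directly from the listed git hash of the vitchyr/multiworld repo):

```shell
# Sketch: pin the versions listed above in a fresh virtualenv (Python 3.5).
pip install torch==0.4.1.post2 mujoco-py==1.50.1.59 gym==0.10.5
# Assumption: the multiworld hash refers to a commit of vitchyr/multiworld.
pip install git+https://github.com/vitchyr/multiworld.git@f711cdb
```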
from rlkit.
Did you modify the example script? The fact that the Average Returns start at -40 seems odd. Also, I wouldn't actually look at "Average return" too much. Distances in the latent space can be hard to interpret. Here's an example of what a run should look like, which also plots the most intuitive metrics (final hand/puck distance):
I made this plot using my version of viskit. I'll run more seeds now with the latest code, but this should work consistently.
Okay, I ran more seeds and it does seem to have high variance in the performance, as shown below.
If it weren't for the green and purple curves, it'd basically be the same as in the paper. I'll update here if I find out why the variance is so high, but I think this confirms that the code is mostly working.
Hi @vitchyr, I am trying to run the RIG algorithm with the Pusher environment, and I faced the same problem as above. I ran 5 different seeds; the AverageReturn and Final hand_distance Mean seem to be on the same scale as yours, but the Final puck_distance Mean is different. It's similar to your green and purple curves.
Three of the experiments (red, green, purple) are still running.
How can I reproduce the same results as in the paper?
The RIG implementation currently uses the "online VAE training." However, the main experiments in the RIG paper use a pre-trained VAE.
The settings can be found on this branch, and should produce results more similar to the RIG paper: https://github.com/vitchyr/rlkit/tree/v0.1.2
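To make the distinction above concrete, here is a minimal sketch of the two training schedules. This is purely illustrative: the function names are hypothetical and are not rlkit's API. The key difference is whether all VAE training happens up front on a static image dataset, or is interleaved with RL updates.

```python
def pretrained_vae_schedule(vae_epochs, rl_epochs):
    """Paper-style RIG: fully train the VAE on a static dataset, then run RL."""
    return ["train_vae"] * vae_epochs + ["train_rl"] * rl_epochs

def online_vae_schedule(rl_epochs, vae_steps_per_epoch=1):
    """Online variant: interleave VAE updates with RL training epochs."""
    schedule = []
    for _ in range(rl_epochs):
        schedule += ["train_vae"] * vae_steps_per_epoch + ["train_rl"]
    return schedule
```

Online VAE training updates the latent space while the policy is learning in it, which is one plausible source of the extra variance discussed in this thread.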
Hi!
Are the results you posted #31 (comment) based on the Online VAE or are they using a pre-trained VAE?
I've been trying to train RIG's Pusher using version 0.1.2, but my results don't look anything like the ones in the paper. I tried it with multiple seeds as well. The parameters are the same as in example/rig/pusher/rig.py. An example of one of the results is below. If the issue is just that there is a lot of variance across seeds, do you know why that may be? Could you tell me what seeds you were using?
oracle.py works fine, by the way.
RIG's Reacher ('SawyerReachXYZEnv-v1' and also 'SawyerReachXYZEnv-v0') also doesn't seem to work as indicated in the paper and has high variance. For the reacher, I'm training the VAE on 100 images, as indicated in the paper, for 100 epochs. I'm running the entire algorithm for 100 epochs as well. The other parameters are the same as for the Pusher.
I'd really appreciate it if you could let me know if something is wrong. Thanks!
@vitchyr I tested it with your linux-gpu-env.yml's environment as well, which had the same package versions. I was using the most recent version of multiworld, but have now installed the package at that commit.
I had to install some libraries that were missing from your environment specification but are needed to run the experiments, including torchvision. The torchvision install also pulled in a different version of pytorch (1.3.0), which is what was then being used.
What version of torchvision are you using and how did you install it without affecting your pytorch installation? It's being used in vae_trainer.py.
- Let me know if you want me to create a pull request with an updated yml file
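One way to avoid torchvision replacing an existing pytorch install is pip's `--no-deps` flag, which skips dependency resolution entirely. A sketch (the 0.2.1 version number is an assumption about what pairs with torch 0.4.1; check the torchvision compatibility table for your setup):

```shell
# Sketch: install torchvision without letting pip upgrade/replace torch.
# Assumed version; verify compatibility with your installed torch first.
pip install --no-deps torchvision==0.2.1
```

With `--no-deps`, pip will not verify that the already-installed torch satisfies torchvision's requirements, so version mismatches become your responsibility.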
It was either the old version of multiworld or of pytorch that made the difference, but the experiments seem to be learning now.
However, they seem to have a much higher variance than the ones you posted above. I'm running it with multiple seeds to see if that makes a difference. The final hand distance for the pusher is given below; for some reason, it's not plotting along with the reacher. I'm smoothing out the curves.
@shivanikishnani Did running multiple seeds make a difference?