Comments (7)
I noticed that the IQN-APEX algorithm is used in the inference code you provided, but the auxiliary-task code mentioned in the paper is very important to me. I hope you can also release the training code. Thanks a lot!
from rainbow-iqn-apex.
I have released the inference code in this repo.
After reading the code, I found that there is no code for the reinforcement learning part, nor for the data generation it relies on.
@marintoro I also found that I could not reach you at the email address given in the paper. Happy to find you here; here is the content of my email:
Hi,
I am a reader of your paper, and I would like to ask some questions about it:
- I found that the RL code described in the paper is not fully open-sourced, and there is no dataset for the RL expert either. I tried to work out how you obtained the RL expert and discovered there is no code or data for it.
- In Section 4.2, you say the desired-rotation reward was introduced because the previous two reward settings fail when a static obstacle is in front of the car, and that you want this rotation reward to encourage exploration. But what defines the optimal trajectory? Is it still the lane waypoint angle? If so, won't the agent still get stuck in the lane, since any attempt to move around the obstacle lowers the rotation reward as the heading departs from the waypoint rotation?
  Is it CARLA's default navigation agent that provides the optimal trajectory? As you note in the introduction, that agent simply stops at any obstacle and never attempts a lane change.
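To make my question concrete, here is a rough sketch of how I understand the desired-rotation reward. This is my own guess, not code from the paper; the function and parameter names are hypothetical:

```python
def rotation_reward(vehicle_yaw_deg: float, waypoint_yaw_deg: float) -> float:
    """Hypothetical sketch (my reading, not the paper's code): reward is
    maximal when the vehicle's heading matches the lane waypoint's heading."""
    # Smallest signed angle difference in degrees, mapped into [-180, 180].
    diff = (vehicle_yaw_deg - waypoint_yaw_deg + 180.0) % 360.0 - 180.0
    # Scale to [0, 1]: 1.0 when perfectly aligned, 0.0 when facing backwards.
    return 1.0 - abs(diff) / 180.0

# If the target yaw is always the lane waypoint's yaw, then steering around
# a static obstacle (which requires deviating from the waypoint heading)
# is penalized -- which is exactly the behavior I am asking about.
```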
Looking forward to your reply.
Have a good day.
Best wishes,
Kin (from HKUST)
@Kin-Zhang
Hello Kin-Zhang! I ran into the same problem with this implementation. Did you find a solution to this issue?
Looking forward to your reply.
Have a good day.
Best wishes,
Zahra (from IUT University).
Hello, I am a reader of your remarkable paper, "End-to-End Model-Free Reinforcement Learning for Urban Driving using Implicit Affordances." I am currently following the README instructions to run your implementation. However, I encountered an issue: I successfully started the CARLA environment, but your code did not execute within the simulation. I received an error indicating that the RL part of the code is not available.
I would greatly appreciate your assistance in resolving this error, as it would be invaluable to my work. I intend to acknowledge your paper in my research.
I look forward to your reply.
Wishing you a wonderful day.
Best regards,
Zahra (from IUT University).
@marintoro
Hi,
I am really interested in another amazing paper of yours, "GRI: General Reinforced Imitation and its Application to Vision-Based Autonomous Driving." Unfortunately, no implementation of that paper has been released.
Could you help me by providing the code? I would greatly appreciate it; having the code would be immensely beneficial to my work, and I intend to acknowledge your paper in my research.
I look forward to your reply.
Best regards,
Xolovezari