
neural-astar's People

Contributors

yonetaniryo


neural-astar's Issues

question about pq_astar

I was looking at this particular line and wondering whether there is a bug in this code. Shouldn't we also handle the case where a node is already in the open list but the new fnew is lower than the value stored there?
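For context, textbook A* handles this case with a decrease-key step. A minimal sketch of that logic (hypothetical names, not the repo's actual implementation), using the common lazy-deletion idiom with Python's heapq:

```python
import heapq

def relax(open_heap, best_f, node, f_new):
    """Push node if f_new improves on the best f seen so far.

    best_f maps node -> lowest f pushed so far. Stale heap entries are
    skipped on pop instead of being removed (lazy deletion), so a node
    already in the open list with a higher f is simply superseded.
    """
    if f_new < best_f.get(node, float("inf")):
        best_f[node] = f_new
        heapq.heappush(open_heap, (f_new, node))
        return True
    return False
```

On pop, any entry whose f no longer matches best_f[node] is discarded; this is equivalent to updating the node in place, without needing a heap that supports decrease-key.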

Possibility of non-square map

Hi All,

I have attempted to use the repo to perform Neural A* search on a rectangular map with input shape (8, 1, 32, 64). However, it seems that 'get_heuristic' raises an einsum error. I am wondering if there is a potential fix for this problem.
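I have not traced the actual einsum in get_heuristic, but a heuristic map can be computed without assuming a square grid. A hedged numpy sketch (hypothetical function name, not the repo's code) of a Chebyshev-style heuristic for an arbitrary (h, w) grid:

```python
import numpy as np

def chebyshev_heuristic(h, w, goal):
    """Chebyshev distance from every cell of an h x w grid to goal=(gy, gx).

    Works for rectangular (non-square) maps because the row and column
    index grids are built independently and combined by broadcasting.
    """
    gy, gx = goal
    ys = np.abs(np.arange(h) - gy)[:, None]   # shape (h, 1)
    xs = np.abs(np.arange(w) - gx)[None, :]   # shape (1, w)
    return np.maximum(ys, xs)                 # broadcasts to (h, w)
```

Broadcasting (h, 1) against (1, w) avoids any operation that implicitly ties the two spatial dimensions together, which is the usual way square-only assumptions sneak in.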

"goal not found" error with pq_astar

I have a 416x416 empty map (all zeros) and appropriate start and goal maps. When using pq_astar, it prints "goal not found" and returns histories and paths tensors that are all zeros.

An OSerror on Colab

Hi, could you please help me fix the OSError I get on Colab the first time I run neural-astar, following the guide? Many thanks!

This is the code for loading the checkpoint, from the Colab example you shared in the README:

import torch
from neural_astar.planner import NeuralAstar, VanillaAstar
from neural_astar.utils.training import load_from_ptl_checkpoint  # the error line

device = "cuda" if torch.cuda.is_available() else "cpu"

neural_astar = NeuralAstar(encoder_arch='CNN').to(device)
neural_astar.load_state_dict(load_from_ptl_checkpoint("../model/mazes_032_moore_c8/lightning_logs/version_0/checkpoints/epoch=33-step=272.ckpt"))

vanilla_astar = VanillaAstar().to(device)

The error, which I have never seen before, is below:

OSError: /usr/local/lib/python3.10/dist-packages/torchaudio/lib/libtorchaudio.so: undefined symbol: _ZN2at4_ops5zeros4callEN3c108ArrayRefINS2_6SymIntEEENS2_8optionalINS2_10ScalarTypeEEENS6_INS2_6LayoutEEENS6_INS2_6DeviceEEENS6_IbEE

For comparison, the outputs in the example are:

/home/yonetani/programs/omron-sinicx/neural-astar/.venv/lib/python3.9/site-packages/tqdm/auto.py:22: TqdmWarning: IProgress not found. Please update jupyter and ipywidgets. See https://ipywidgets.readthedocs.io/en/stable/user_install.html
  from .autonotebook import tqdm as notebook_tqdm
load ../model/mazes_032_moore_c8/lightning_logs/version_0/checkpoints/epoch=33-step=272.ckpt

The path format is very similar to that on Ubuntu. Does that mean Colab connects to an Ubuntu virtual machine? I am quite new to Ubuntu and hope you can help me fix this OSError. Many thanks!
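For what it's worth, this kind of undefined-symbol OSError at import time usually means torchaudio was built against a different torch version than the one installed (Colab preinstalls both, and pip-installing a different torch can break the pairing). A small stdlib sketch to check which versions are actually installed:

```python
import importlib.metadata as md

def installed_version(pkg):
    """Return the installed version string for pkg, or None if absent."""
    try:
        return md.version(pkg)
    except md.PackageNotFoundError:
        return None

# Compare these: torch and torchaudio builds from different releases
# are the usual cause of undefined-symbol errors at import time.
for pkg in ("torch", "torchaudio"):
    print(pkg, installed_version(pkg))
```

If the versions disagree, reinstalling a matching pair (e.g. `pip install -U torch torchaudio`) typically resolves it.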

Role of guidance map

Hi,
I have some confusion about the role of the guidance map.

the paper says

Given a problem instance (i.e., an environmental map annotated with start and goal points), the encoder transforms it into a scalar-valued map representation referred to as a guidance map;
The differentiable A* module then performs a search with the guidance map to output a search history and a resulting path.

The guidance map comes from a U-Net encoder, so is it performing segmentation?
1) What I understand is that, initially, on the raw or binary image it outputs the segmented regions containing blocked and unblocked areas, right?

2) Scalar-valued map representation: does it also output the A* path when we backpropagate using the start and goal nodes? And do we then update these scalar values, termed guidance costs, using the differentiable A* module to find the optimal path, as Figure 2 shows?

dataloader for sdd

Hi, do you have the dataloader for SDD? For CSM and TiledMP there is a single .npz per dataset, but for SDD there are 8 scene types, each with several "video" subfolders, and each subfolder contains several .npz files. So I am wondering if you have the SDD dataloader for reproduction. Thanks.

Could you please elaborate on the mazes_032_moore_c8.npz file?

[screenshot: contents of the opened .npz file, with some dimensions highlighted]

I opened the .npz file that is given.

Can you please tell me what dataset it is and how you created it?

Also, what are the dimensions that are highlighted? I assume 800 is the training-set size and 32 x 32 is the image size mentioned in the paper.

Is 800 the number of input maps?
Is one 800 x 1 array a one-hot matrix representing the start positions, and the other 800 x 1 array the goal positions?
What does 800 x 8 x 1 represent?

And does the remaining 100 represent the validation and test split?
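Questions like this can be answered for any .npz by listing the array names and shapes directly. A quick sketch (built on an in-memory dummy archive here; with the real dataset you would call np.load("mazes_032_moore_c8.npz") instead):

```python
import io
import numpy as np

# Stand-in for the real file: a tiny .npz with made-up array names.
# The keys and shapes below are illustrative, not the dataset's actual ones.
buf = io.BytesIO()
np.savez(buf, maps=np.zeros((800, 32, 32)), goals=np.zeros((800, 1)))
buf.seek(0)

data = np.load(buf)
shapes = {k: data[k].shape for k in data.files}
print(shapes)
```

Running this against the actual mazes_032_moore_c8.npz would show exactly which array each highlighted dimension belongs to.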

Introduce PTL

  • Update train.py to use PTL functions
  • Update PyTorch version

Metrics visualization during training and evaluation

On the "minimal" branch, unlike the example.ipynb of the previous version of the repo (the one without PyTorch Lightning, similar to the "3-improve-repository-organization" branch), it seems that the logs of the Opt, Exp, and Hmean metrics are not used during training. I would like to visualize those metrics, but the "metrics" folder is not created by running the train.py script. Thank you for your support.

Why does initializing obstacles_maps with zeros instead of ones result in an IndexError?

Hello,

I am wondering what it means when this line throws an error message such as: "IndexError: index -9223372036854775808 is out of bounds for dimension 1 with size 1024"?

You can reproduce this error, for example, by running the train.py script with learn_obstacles=True set on this line, and initializing obstacles_maps to zeros (torch.zeros_like(start_maps)) instead of ones (torch.ones_like(start_maps)) on this line. Why does an initial obstacles_maps of zeros throw an error?
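I cannot confirm this without stepping through the repo, but a plausible mechanism for that specific index value is NaN propagation: with obstacles_maps all zeros, every cell is treated as blocked, so the masked score map has no finite entries; softmax over an all -inf row produces NaN, and casting NaN to an int64 index yields INT64_MIN (-9223372036854775808) on most platforms. A hedged numpy reproduction of the NaN part:

```python
import numpy as np

# Hypothetical reproduction of the suspected root cause (an assumption,
# not traced through the actual neural-astar code): softmax over a row
# in which every candidate has been masked out to -inf.
scores = np.full(4, -np.inf)           # every candidate masked out
shifted = scores - scores.max()        # -inf - (-inf) = nan
probs = np.exp(shifted) / np.exp(shifted).sum()
print(probs)                           # the whole distribution is NaN
```

Initializing obstacles_maps to ones leaves at least one finite score per row, which would explain why that configuration does not crash.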

Support for unstructured graphs

Hi @yonetaniryo,

Thank you for the amazing work, which I think paves the way for more principled research in neural search-based planning. As the title suggests, and seeing how the repo is still quite active in commits (early 2023), I would like to know if the NA* model now supports unstructured (non grid-like) graphs, which could really benefit from your work. If not, did you reason about this and the biggest changes that this addition would require?

Off the top of my head, I was thinking that substituting the convolutional encoder with a graph convolutional network (GCN) that still produces a dense cost matrix, and perhaps rethinking some of the parameters, such as the temperature $\tau$, which could be computed from the diameter of the graph rather than the extent of the grid, should be a sufficient adaptation that fully respects your framework. I am open to hearing your thoughts.

On a side note, I also read the follow-up on neural weighted A*, which you also mentioned in #12, as I agree with the idea of learning a more flexible heuristic function. Since that work also focuses on 1-hop grid-like graphs, is there any well-known admissible heuristic function for unstructured graphs, much like the Chebyshev heuristic for grid-like ones? I am particularly interested in the comment that you made in #12, about NGA* not working well yet, despite it soundly outperforming NA* in its case studies, besides the number of expanded nodes. Were you referring to some specific aspects, or was it more of a general statement?
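On the admissible-heuristic question raised above: one well-known family for arbitrary graphs is the landmark (ALT) heuristic, h(n) = max over landmarks L of |d(L, n) - d(L, goal)|, which is admissible by the triangle inequality for any non-negative edge weights. A minimal single-landmark sketch (hypothetical graph representation, not tied to this repo):

```python
import heapq

def dijkstra(graph, src):
    """Shortest distances from src; graph maps node -> [(nbr, weight), ...]."""
    dist = {src: 0.0}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale entry
        for v, w in graph.get(u, ()):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

def alt_heuristic(landmark_dist, node, goal):
    """Admissible lower bound |d(L, node) - d(L, goal)| from one landmark."""
    inf = float("inf")
    dn = landmark_dist.get(node, inf)
    dg = landmark_dist.get(goal, inf)
    if dn == inf or dg == inf:
        return 0.0  # unreachable from landmark: fall back to trivial bound
    return abs(dn - dg)
```

In practice several landmarks are precomputed and the maximum of their bounds is taken; the precomputation cost is one Dijkstra run per landmark.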

The results of pq_astar and differentiable A* look the same.

Hi! While running the code, I noticed that the comparison baseline for Neural A* is not standard A* but something called differentiable A*, which is really novel to me. However, after I changed it to standard A* (as described, I set every use_differentiable_astar variable to false), I found that standard A* returned results identical to differentiable A*.
I have to say that I am not very familiar with these search algorithms. Could you please briefly introduce the differentiable A* algorithm, or recommend a resource for learning it? Additionally, could you tell me why differentiable A* returns the same result as standard A* on these maps? Many thanks, and I look forward to your reply!

Some confusions

Hi, I have some confusion about this amazing work.

  1. I would like to inquire about the MP Dataset results described in the article. I noticed that you separated the environment groups for training and testing. How were the final results calculated after integrating these groups?
  2. Regarding the ground-truth, I was wondering why Dijkstra's algorithm was used instead of standard A*. Additionally, when I utilized your source code, I found that the ground-truth (opt_traj) generated by Dijkstra's algorithm may not always be optimal. Could you offer any insight on this matter?

example

Add standard A*

Add a standard A* search with a priority queue that can be used with trained encoders for faster search.
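A minimal sketch of what such a planner could look like: grid A* with a heapq priority queue, where a per-cell cost map (e.g., the trained encoder's guidance output) is added to each step cost. The function name and the exact cost formulation are assumptions for illustration, not the repo's final API:

```python
import heapq

def pq_astar(cost_map, start, goal):
    """Grid A* where cost_map holds a non-negative guidance cost per cell.

    start, goal: (row, col) tuples. Returns the path as a list of cells,
    or None if the goal is unreachable. Uses 4-connected moves and a
    Manhattan heuristic, which is admissible since each step costs >= 1.
    """
    h, w = len(cost_map), len(cost_map[0])
    heur = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    g = {start: 0.0}
    parent = {}
    heap = [(heur(start), start)]
    closed = set()
    while heap:
        _, u = heapq.heappop(heap)
        if u in closed:
            continue
        if u == goal:
            path = [u]
            while u in parent:
                u = parent[u]
                path.append(u)
            return path[::-1]
        closed.add(u)
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            v = (u[0] + dy, u[1] + dx)
            if not (0 <= v[0] < h and 0 <= v[1] < w):
                continue
            nv = g[u] + 1.0 + cost_map[v[0]][v[1]]  # step cost + guidance
            if nv < g.get(v, float("inf")):
                g[v] = nv
                parent[v] = u
                heapq.heappush(heap, (nv + heur(v), v))
    return None
```

With a uniform cost map this reduces to plain grid A*; with an encoder's guidance map, high-cost cells are avoided exactly as in the differentiable search, but with a much faster discrete priority queue.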

Is differentiable A* search admissible?

Hi,

I am very interested in your work and want to get a good understanding of this code. However, I have a question about how the heuristic is implemented here.

I am looking at this line of code in differentiable_astar.py. From my understanding, the heuristic function produces an h value greater than 1, while the costmap values are between 0 and 1. If g_ratio is left at its default (0.5), wouldn't this heuristic value violate the admissibility assumption of A*? Yet, when I ran the example code you provided here, the model trained successfully and produced good results. I am not sure why this works; maybe I am missing something? Could you please help me with this?

Thanks!
