lilin-hitcrt / ssc
Semantic Scan Context
License: MIT License
Hi, thanks for your work and code. I notice that the F1 max score of SG_PR reported in your paper is quite different, actually much lower, than the original. I'm quite interested in the place recognition task, so I'd like to go into the details. Did I miss any key points? Could you please explain this result? Thanks and best wishes.
Dear authors,
Recently I've been reading the code of SSC and RINet. I find that during evaluation one needs pairing files at paths like /pairs_kitti/neg_100/00.txt. However, I don't see any scripts that directly generate these .txt pairing files. So I'm wondering: are they generated by some other script from the .npz file produced by gen_pairs.py? And does that mean you avoid evaluating (or defining PN labels for) any pair of poses whose distance is between 3 m and 20 m?
Thanks a lot and hope to get your reply!
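In case it helps others with the same question, here is a minimal sketch of how pairing files could in principle be derived from ground-truth poses with a 3 m positive / 20 m negative threshold. This is not the repository's actual script; the function name and the toy poses are hypothetical:

```python
import numpy as np

def label_pairs(positions, pos_thresh=3.0, neg_thresh=20.0):
    """Label frame pairs by pose distance (hypothetical helper).

    Pairs closer than pos_thresh are positives (1), pairs farther than
    neg_thresh are negatives (0), and pairs in between are skipped,
    i.e. left out of the evaluation entirely.
    """
    pairs = []
    n = len(positions)
    for i in range(n):
        for j in range(i + 1, n):
            d = np.linalg.norm(positions[i] - positions[j])
            if d < pos_thresh:
                pairs.append((i, j, 1))
            elif d > neg_thresh:
                pairs.append((i, j, 0))
            # pos_thresh <= d <= neg_thresh: ambiguous, not written out
    return pairs

# Toy 2D poses: frames 0/1 are close, frame 2 is far away, frame 3 is in between.
poses = np.array([[0.0, 0.0], [1.0, 0.0], [50.0, 0.0], [10.0, 0.0]])
print(label_pairs(poses))
```

Under this reading, pairs at 3-20 m would simply never appear in the .txt files, which would explain the question above.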
@lilin-hitcrt
Thanks for your great work!
I have a question about integrating SSC into a LiDAR odometry package such as FAST_LIO.
I know Scan Context (Kim) has already been integrated into FAST_LIO to perform loop closure.
Is it possible to get semantic data while running FAST_LIO and use SSC to perform loop closure as well?
I would appreciate any engineering tips you could give me. 😄
Thanks,
Thank you so much for your code and the detailed explanation; it's been a great help to me.
I have a few questions about the data used for plotting P-R curves. As you said, the first column of each data file is the similarity score, and the second column is the ground truth. I wonder whether all the similarity scores are obtained through the similarity scoring described in the paper, because some files contain numbers greater than 1, for example results/kitti/neg_100/SC/00.txt.
My other question is about the evaluation samples. I noticed that only pairs are provided, but how do you measure the effectiveness of the top-k candidates? Usually 25 candidates are generated for each query frame, but it seems to me that only one candidate is evaluated.
Thanks a lot and I really appreciate it if you can reply to me at your earliest convenience:)
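For anyone else working with these two-column files, the P-R curve and the F1 max score can be computed by sweeping a threshold over the scores. A minimal NumPy sketch with made-up toy data, assuming higher score means more similar:

```python
import numpy as np

def pr_curve(scores, labels):
    """Precision/recall at every possible threshold over the scores."""
    order = np.argsort(-scores)          # sort descending by similarity
    labels = labels[order].astype(bool)
    tp = np.cumsum(labels)               # true positives accepted so far
    fp = np.cumsum(~labels)              # false positives accepted so far
    precision = tp / (tp + fp)
    recall = tp / labels.sum()
    return precision, recall

# Toy data in the same two-column format: similarity score, ground truth.
scores = np.array([0.9, 0.8, 0.6, 0.4, 0.2])
labels = np.array([1, 1, 0, 1, 0])
p, r = pr_curve(scores, labels)
f1_max = (2 * p * r / (p + r + 1e-12)).max()
print(round(f1_max, 3))
```

Note that this only needs the scores to be monotonically comparable, so values greater than 1 (as in the SC files mentioned above) do not affect the curve.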
Hi, thanks for your excellent work. I have a question from reading the code: why is the code for finding the shortest-range point commented out, so that ssc_dis is determined by the iteration order over all the points? It seems it will use the last point visited in the iteration.
// if(ssc_dis.at<cv::Vec4f>(0, sector_id)[3]<10||distance<ssc_dis.at<cv::Vec4f>(0, sector_id)[0]){
ssc_dis.at<cv::Vec4f>(0, sector_id)[0] = distance;
ssc_dis.at<cv::Vec4f>(0, sector_id)[1] = filtered_pointcloud->points[i].x;
ssc_dis.at<cv::Vec4f>(0, sector_id)[2] = filtered_pointcloud->points[i].y;
ssc_dis.at<cv::Vec4f>(0, sector_id)[3] = label;
// }
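To illustrate what the question is getting at, here is a minimal Python sketch (not the actual SSC code; the points and sector assignment are made up) contrasting the commented-out nearest-point check with the current last-point-wins behavior:

```python
# Each hypothetical point is (distance, x, y, label, sector_id).
points = [
    (5.0, 1.0, 0.0, 40, 0),   # nearer point in sector 0
    (12.0, 3.0, 0.5, 44, 0),  # farther point in the same sector
]

def fill_sector(points, keep_nearest):
    sector = {}  # sector_id -> (distance, x, y, label)
    for dist, x, y, label, sid in points:
        if keep_nearest:
            # Analogue of the commented-out check: only overwrite the slot
            # when it is empty or the new point is strictly nearer.
            if sid not in sector or dist < sector[sid][0]:
                sector[sid] = (dist, x, y, label)
        else:
            # Current behavior: every point overwrites, so the last one wins.
            sector[sid] = (dist, x, y, label)
    return sector

print(fill_sector(points, keep_nearest=True)[0][0])   # nearest distance kept
print(fill_sector(points, keep_nearest=False)[0][0])  # last distance kept
```

With the check disabled, the stored distance for sector 0 depends purely on iteration order, which matches the observation above.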
Hi, thanks again for your code and evaluation results. I'm reproducing the results of your method, and the program is quite slow when I run ./eval_seq in the bin folder; it would take a long time to test even one sequence. So I tried eval_pair instead. Again, it takes 10109 ms to run the ssc.getScore function. In your paper, however, the average time for making descriptors is approximately 2.563 ms. Have I missed any details? My platform: Intel(R) Core(TM) i7-9800X CPU @ 3.80GHz.
Hi, thank you very much for your great work and for the code.
I have read your paper and I appreciate the level of detail that you provided. However, I still have a question about the encoding function used for your descriptor when you didn't use semantic data (while calculating the contribution of the individual components).
Could you please share the block representation and the similarity score used in this case?
Thank you very much !
Best regards
Hi, thanks for your open-source code, which includes the results of the compared methods. However, I haven't found any part of the code that calculates the EP score. Would you please give more details on this topic? Is the evaluation pair list still the neg_100 one you mentioned in #1 for the F1 max score? How do you then define PR0 and RP100, since the pair list is not constructed in order?
@lilin-hitcrt Hello, I have read your paper; thank you very much for your excellent work. I am now trying to port SSC to the LOAM algorithm, and I have a few questions.
I would like to ask: I have a 16-line lidar. If I want to run the SSC-based LOAM algorithm in real time, where does the semantic label come from, and what other issues do I need to pay attention to?
Looking forward to your reply!
@lilin-hitcrt Hi, I want to know how to get the labels for the point cloud, such as KITTI/odometry/dataset/sequences/08/labels/. Is there a link where I can download them? Thanks for your answer!