TLDR: I discovered issues with how pruning and `dijkstra_shortest_paths` were being used. Optimal path planning (using aggressive joint delta checks), NOT the IK calculations, has sped up by 2000-5000x in my limited testing.
I have been working with Descartes lately at my job and made some improvements. I discovered a few issues and have developed fixes for them. I'm making this issue to reference in my pull request (coming this weekend) and to share the exciting results with you all. This is related to issue #21 and issue #48.
The two most expensive components of Descartes are computing the IK solutions and searching the graph for an optimal path. IK is O(n) in the number of trajectory points; the graph search is superlinear, roughly O(n^2). With IKFast, the IK can be done very quickly, but it was still taking minutes to solve for the optimal path with several hundred points.
The long planning time can be shortened by fixing the start and end joint positions, but you have to know those positions in advance, and doing so sacrifices optimality.
The planner is currently slow for two main reasons:
- The planner checks for large joint changes and, on that condition, gives edges huge weights when in fact the edges should simply be removed. Not adding an edge originally led to strange problems where the graph search would return paths only a few points long. This turned out to be an issue with `findStartVertices` and `findEndVertices`, which were checking all of the vertices for the absence of in-edges and out-edges, respectively. Naturally, once we start pruning edges, outliers in the middle of the user's trajectory show up as start or end points under that logic. Fixing that allows Dijkstra's search to run MUCH faster and still fail gracefully.
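A minimal sketch of the pruning rule (the function name and signature are my own, not Descartes' actual edge-building API): instead of assigning a huge weight to a bad transition, simply never add the edge.

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Hypothetical helper: return false (i.e. do NOT add a graph edge) when any
// single joint moves more than max_delta radians between adjacent trajectory
// points. The old behavior kept the edge with a huge penalty weight instead.
bool jointMotionValid(const std::vector<double>& from,
                      const std::vector<double>& to,
                      double max_delta)
{
  if (from.size() != to.size()) return false;
  for (std::size_t i = 0; i < from.size(); ++i)
    if (std::fabs(to[i] - from[i]) > max_delta)
      return false; // prune: edge is never added to the graph
  return true;
}
```

In the edge-building loop this turns "add edge with huge weight" into a plain skip, so invalid transitions never enter the graph at all.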
- The Boost algorithm being used (http://www.boost.org/doc/libs/1_57_0/libs/graph/doc/dijkstra_shortest_paths.html) calculates the shortest path from a source vertex to EVERY other vertex. Currently we are running the algorithm once for every (start, end) pair (n^2 runs) when all we need to do is run it once from each start point (n runs). Fixing this gives roughly an N-times speedup, where N is the number of end-point discretizations.
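To illustrate why one run per start vertex suffices, here is a self-contained single-source Dijkstra (a stand-in sketch, not the Boost call itself): one search from a start vertex already yields the shortest distance to every vertex, so all end vertices can be read off a single result array.

```cpp
#include <functional>
#include <limits>
#include <queue>
#include <utility>
#include <vector>

// Minimal single-source Dijkstra, standing in for boost::dijkstra_shortest_paths.
// adj[u] holds (neighbor, edge weight) pairs. One call from `src` returns the
// shortest distance to EVERY vertex, so checking all end vertices needs only
// one run per start vertex, not one run per (start, end) pair.
std::vector<double> dijkstra(
    const std::vector<std::vector<std::pair<int, double>>>& adj, int src)
{
  std::vector<double> dist(adj.size(), std::numeric_limits<double>::infinity());
  using Item = std::pair<double, int>; // (distance so far, vertex)
  std::priority_queue<Item, std::vector<Item>, std::greater<Item>> pq;
  dist[src] = 0.0;
  pq.push({0.0, src});
  while (!pq.empty())
  {
    auto [d, u] = pq.top();
    pq.pop();
    if (d > dist[u]) continue; // stale queue entry
    for (const auto& [v, w] : adj[u])
      if (dist[u] + w < dist[v])
      {
        dist[v] = dist[u] + w;
        pq.push({dist[v], v});
      }
  }
  return dist;
}
```

With S start vertices and E end vertices, the old pattern ran the search on the order of S*E times; scanning the distance array over all end vertices after each of the S runs finds the same optimum.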
Using C++11's `steady_clock` from the `<chrono>` library, I took some poor man's measurements using the simulated objects in Godel (the smallest face closest to the robot and the biggest face) and got the following timings (all builds used release flags):
| Configuration | Joint solutions | Time |
| --- | --- | --- |
| Without pruning | 6092 | 10467 ms |
| Without pruning | 39948 | 211514 ms |
| With pruning | 6092 | 37 ms |
| With pruning | 39948 | 561 ms |
| With pruning + Boost improvement | 6084 | 5 ms |
| With pruning + Boost improvement | 39948 | 41 ms |
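For reference, the timings were taken roughly like this (the `timeMs` helper and the timed call are illustrative placeholders, not Descartes code):

```cpp
#include <chrono>
#include <cstdint>

// Poor man's wall-clock timing with C++11's steady_clock: returns elapsed
// milliseconds for whatever work the callable performs.
template <typename F>
std::int64_t timeMs(F&& work)
{
  auto start = std::chrono::steady_clock::now();
  work(); // e.g. the Descartes planning call -- placeholder here
  auto stop = std::chrono::steady_clock::now();
  return std::chrono::duration_cast<std::chrono::milliseconds>(stop - start)
      .count();
}
```

`steady_clock` is monotonic, so it is the right choice for interval timing (unlike `system_clock`, which can jump when the wall clock is adjusted).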
I pruned edges whose single-joint motion exceeded 10 degrees between consecutive points. I've also developed a system of time parameterization with the help of @shaun-edwards and @jrgnicho, which I will push this weekend as well.
Would y'all prefer these improvements separate from the time-parameterization pull request, or together?