livox_detection's Introduction

Instructions in Chinese (中文版本使用说明)

1 Introduction

Livox SDK is the software development kit designed for all Livox products. It is developed in C/C++ following the Livox SDK Communication Protocol and provides an easy-to-use C-style API. With Livox SDK, users can quickly connect to Livox products and receive point cloud data.

Livox SDK consists of the Livox SDK communication protocol, the Livox SDK core, the Livox SDK API, a Linux sample, and a ROS demo.

Prerequisites

  • Ubuntu 14.04/Ubuntu 16.04/Ubuntu 18.04, both x86 and ARM (Nvidia TX2)
  • Windows 7/10, Visual Studio 2015 Update3/2017/2019
  • C++11 compiler

2 Livox SDK Communication Protocol

The Livox SDK communication protocol is open to all users. It is the protocol used between user programs and Livox products, and it consists of control commands and data formats. Please refer to the Livox SDK Communication Protocol for detailed information.

3 Livox SDK Core

Livox SDK provides the implementation of control commands and point cloud data transmission, as well as the C/C++ API. The basic structure of the Livox SDK core is shown below:

Livox SDK Architecture

User Datagram Protocol (UDP) is used for communication between Livox SDK and LiDAR sensors; please refer to the Livox SDK Communication Protocol for further information. The point cloud data handler supports point cloud data transmission, while the command handler receives and sends control commands. The C/C++ API is built on top of both handlers.

Livox LiDAR sensors can be connected to the host directly or through the Livox Hub; Livox SDK supports both connection methods. When LiDAR units are connected to the host directly, the host establishes communication with each LiDAR unit individually. When the LiDAR units connect through a Hub, the host communicates only with the Livox Hub, and the Hub in turn communicates with each LiDAR unit.

4 Livox SDK API

Livox SDK API provides a set of C-style functions which can be conveniently integrated into C/C++ programs. Please refer to the Livox SDK API Reference for further information.
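For orientation, here is a minimal sketch of the API lifecycle, assuming the Init()/Start()/Uninit() entry points documented in the Livox SDK API Reference (error handling and callback registration are trimmed):

#include <stdio.h>
#include "livox_sdk.h"

int main(void) {
  /* Initialize the SDK runtime before any other API call. */
  if (!Init()) {
    printf("Livox SDK init failed\n");
    return -1;
  }
  /* Device and data callbacks would be registered here (see 4.2). */
  if (!Start()) {  /* start device discovery and worker threads */
    Uninit();
    return -1;
  }
  getchar();       /* keep the process alive until a key is pressed */
  Uninit();        /* release all SDK resources */
  return 0;
}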

4.1 Installation

The installation procedures for Ubuntu 18.04/16.04/14.04 LTS and Windows 7/10 are shown here as examples. For Ubuntu 18.04/16.04/14.04 32-bit LTS and Mac, see the Livox-SDK wiki.

4.1.1 Ubuntu 18.04/16.04/14.04 LTS

Dependencies

Livox SDK requires CMake 3.0.0 or higher. You can install it using apt:

sudo apt install cmake

Compile Livox SDK

Clone the repository and run the following commands to compile the project:

git clone https://github.com/Livox-SDK/Livox-SDK.git
cd Livox-SDK
cd build && cmake ..
make
sudo make install

4.1.2 Windows 7/10

Dependencies

Livox SDK supports Visual Studio 2015 Update 3/2017/2019 and requires CMake 3.0.0 or higher.

In the Livox-SDK directory, run the following commands to create the Visual Studio solution file. To generate the 32-bit project:

cd Livox-SDK/build

For Visual Studio 2015 Update 3/2017:

cmake ..

For Visual Studio 2019:

cmake .. -G "Visual Studio 16 2019" -A Win32

To generate the 64-bit project:

cd Livox-SDK/build 

For Visual Studio 2015 Update 3:

cmake .. -G "Visual Studio 14 2015 Win64"

For Visual Studio 2017:

cmake .. -G "Visual Studio 15 2017 Win64"

For Visual Studio 2019:

cmake .. -G "Visual Studio 16 2019" -A x64

Compile Livox SDK

You can now compile the Livox SDK in Visual Studio.

4.1.3 ARM-Linux Cross Compile

The procedure for cross-compiling Livox-SDK for ARM-Linux is shown below.

Dependencies

The host machine requires CMake. You can install it using apt:

sudo apt install cmake

Cross Compile Toolchain

If your ARM board vendor provides a cross compile toolchain, you can skip the following step of installing the toolchain and use the vendor-supplied cross compile toolchain instead.

The following commands install C and C++ cross-compiler toolchains for 32-bit and 64-bit ARM boards. Install the toolchain that matches your board: the 64-bit toolchain for a 64-bit SoC board, and the 32-bit toolchain for a 32-bit SoC board.

Install the ARM 32-bit cross-compile toolchain:

sudo apt-get install gcc-arm-linux-gnueabi g++-arm-linux-gnueabi

Install the ARM 64-bit cross-compile toolchain:

sudo apt-get install gcc-aarch64-linux-gnu g++-aarch64-linux-gnu

Cross Compile Livox-SDK

For the ARM 32-bit toolchain, run the following commands in the Livox-SDK directory to cross-compile the project:

cd Livox-SDK
cd build && \
cmake .. -DCMAKE_SYSTEM_NAME=Linux -DCMAKE_C_COMPILER=arm-linux-gnueabi-gcc -DCMAKE_CXX_COMPILER=arm-linux-gnueabi-g++
make

For the ARM 64-bit toolchain, run the following commands in the Livox-SDK directory to cross-compile the project:

cd Livox-SDK
cd build && \
cmake .. -DCMAKE_SYSTEM_NAME=Linux -DCMAKE_C_COMPILER=aarch64-linux-gnu-gcc -DCMAKE_CXX_COMPILER=aarch64-linux-gnu-g++
make

Note:

  • The gcc cross compiler needs to support the C++11 standard.

4.2 Run Livox SDK Sample

Two samples are provided in Sample/Lidar and Sample/Hub. They demonstrate how to configure Livox LiDAR units and receive point cloud data when connecting Livox SDK to LiDAR units directly or through a Livox Hub, respectively. A condensed sketch of the direct-connection flow is shown below.
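The sketch condenses the flow into one file, assuming the SetBroadcastCallback/AddLidarToConnect/SetDataCallback signatures from the Livox SDK API Reference; the real samples additionally handle device state changes and call LidarStartSampling before data arrives.

#include <stdio.h>
#include "livox_sdk.h"

/* Called for every point cloud/IMU packet from a connected unit. */
static void OnLidarData(uint8_t handle, LivoxEthPacket *data,
                        uint32_t data_num, void *client_data) {
  if (data != NULL) {
    printf("lidar %u: %u points, data_type %u\n",
           (unsigned)handle, (unsigned)data_num, (unsigned)data->data_type);
  }
}

/* Called when a LiDAR unit broadcasts on the LAN. */
static void OnDeviceBroadcast(const BroadcastDeviceInfo *info) {
  uint8_t handle = 0;
  if (info == NULL) return;
  /* Connect the unit and register a per-device data callback. */
  if (AddLidarToConnect(info->broadcast_code, &handle) == kStatusSuccess) {
    SetDataCallback(handle, OnLidarData, NULL);
  }
}

int main(void) {
  if (!Init()) return -1;
  SetBroadcastCallback(OnDeviceBroadcast);
  if (!Start()) { Uninit(); return -1; }
  getchar();  /* run until a key is pressed */
  Uninit();
  return 0;
}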

4.2.1 Ubuntu 18.04/16.04/14.04 LTS

For Ubuntu 18.04/16.04/14.04 LTS, run lidar_sample if connected to LiDAR unit(s):

cd sample/lidar && ./lidar_sample

or run hub_sample if connected through a Hub:

cd sample/hub && ./hub_sample

4.2.2 Windows 7/10

After compiling the Livox SDK as shown in section 4.1.2, you can find hub_sample.exe or lidar_sample.exe in the {Livox-SDK}\build\sample\hub\Debug or {Livox-SDK}\build\sample\lidar\Debug folder, respectively, which can be run directly.

You will then see the sample's output in the console.

4.3 Connect to specific LiDAR units

By default, the provided samples connect to all broadcasting devices on your LAN. There are two ways to connect to specific units:

  • run the sample with input options

  • edit the Broadcast Code list in the source code

NOTE:

Each Livox LiDAR unit has a unique Broadcast Code. The Broadcast Code consists of its serial number and an additional number (1, 2, or 3). The serial number can be found on the body of the LiDAR unit (below the QR code). The Broadcast Code may be used when you want to connect to specific LiDAR unit(s). The detailed format is shown below:

Broadcast Code

4.3.1 Program Options

We provide the following program options for connecting the specific units and saving log file:

[-c] : Register LiDAR units by Broadcast Code. Connect to the registered units ONLY.
[-l] : Save the log file (in the executable file's directory).
[-h] : Show help.

Here is an example:

./lidar_sample_cc -c "00000000000002&00000000000003&00000000000004" -l
./hub_sample_cc -c "00000000000001" -l
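For illustration only, here is a hypothetical helper showing how such an '&'-separated code list can be split; the samples' actual option parsing may differ.

#include <stdio.h>
#include <string.h>

#define MAX_CODES 32
#define CODE_SIZE 16  /* room for a broadcast code plus the terminator */

/* Split "code1&code2&..." into a fixed-size table; returns the count. */
static int SplitBroadcastCodes(const char *arg, char codes[][CODE_SIZE],
                               int max) {
  char buf[1024];
  int n = 0;
  strncpy(buf, arg, sizeof(buf) - 1);
  buf[sizeof(buf) - 1] = '\0';
  for (char *tok = strtok(buf, "&"); tok != NULL && n < max;
       tok = strtok(NULL, "&")) {
    strncpy(codes[n], tok, CODE_SIZE - 1);
    codes[n][CODE_SIZE - 1] = '\0';
    ++n;
  }
  return n;
}

int main(void) {
  char codes[MAX_CODES][CODE_SIZE];
  int n = SplitBroadcastCodes("00000000000002&00000000000003&00000000000004",
                              codes, MAX_CODES);
  for (int i = 0; i < n; ++i)
    printf("code %d: %s\n", i, codes[i]);
  return 0;
}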

4.3.2 Edit Broadcast Code List

Comment out the following code section:

/** Connect all the broadcast device. */
int lidar_count = 0;
char broadcast_code_list[kMaxLidarCount][kBroadcastCodeSize];

Uncomment the following code section, set BROADCAST_CODE_LIST_SIZE, and replace the broadcast code list in main.c for both the LiDAR sample ({Livox-SDK}/sample/lidar/main.c) and the Hub sample ({Livox-SDK}/sample/hub/main.c) with the broadcast codes of your devices before building.

/** Connect the broadcast device in list, please input the broadcast code and modify the BROADCAST_CODE_LIST_SIZE. */
/*#define BROADCAST_CODE_LIST_SIZE  3
int lidar_count = BROADCAST_CODE_LIST_SIZE;
char broadcast_code_list[kMaxLidarCount][kBroadcastCodeSize] = {
  "000000000000002",
  "000000000000003",
  "000000000000004"
};*/

4.4 Generate the lvx file

We provide C++ samples that generate the lvx file for the Hub and LiDAR unit(s). You can run them in the same way as described in 4.2.1 and 4.2.2.

4.4.1 Program Options

You can also use the program options from 4.3.1 to connect to specific devices and generate the log file. We provide two additional options for the lvx file:

[-t] : Time to save point cloud data to the lvx file (unit: s).
[-p] : Get the extrinsic parameters from a standard extrinsic.xml file (the same format as the Viewer) in the executable file's directory. (Mainly for LiDAR unit(s), as the Hub calculates the extrinsic parameters by itself.)

Here is an example:

./lidar_lvx_sample -c "00000000000002&00000000000003&00000000000004" -l -t 10 -p
./hub_lvx_sample -c "00000000000001" -l -t 10

5 Support

You can get support from Livox with the following methods:

  • Send email to [email protected] with a clear description of your problem and your setup
  • GitHub Issues


livox_detection's Issues

Any plans to release a C++ version?

I want to do real-time object detection, but the point cloud data preprocessing module takes a lot of time.
Do you have any plans to release a C++ version?

Problem when running "python3 setup.py develop "

I'm trying to run "python3 setup.py develop" and am facing the following issue, which I could not fix:

[screenshot of the error]

I've installed all the dependencies listed.
I want to use the package with a Livox HAP LiDAR on Ubuntu 20.04.
Can you help me?

lvx2 to bag?

How can the LVX2 test data provided on the official website be converted to a bag file? lvx files can be converted using driver1, but driver2 does not have this functionality. Thanks.

Problem when using Python

I created a new folder 'build' in livox_detection, then ran:
(base) shiyanshi@shiyanshi-System-Product-Name:~/livox_detection$ python3 setup.py develop
After a long list of output, I got this:
IndexError: list index out of range
How can I deal with it?

Fusing camera

Hi, I want to fuse the camera with the Mid-70 to perform 3D detection. However, there is no camera data in the Livox simulation dataset. Can other pre-trained models be directly applied to the point cloud of Livox with the camera? If it is not trivial, how should I train a fusion detector for Livox LiDAR?

Does it work for Livox Mid-360

@Livox-SDK I am trying to detect people with a Livox Mid-360, but the detection is bad. The pedestrian bounding box does not track the people; instead it stays in one place. I am wondering whether this works with the Mid-360. Could you help me out?

Object is detected after one minute at an inference time of 50 ms

Hello, I tried this tool on a GTX 1050 and it stated that it has an inference time of 50 ms, although the time until an object is detected is too high (1 minute).

I also tried it with an Nvidia Jetson Xavier NX, but there I can only install tensorflow 1.15.4 and there are several problems with the memory allocation of TensorFlow when the detection is started. I also tried to accelerate the model with TensorRT, but an error occurred while transforming the pb file to onnx, and I got stuck there. The tool starts detecting after about 1 to 2 minutes with an inference time of 200 ms.

What can I do to make it faster?
Thanks in advance

How can I run inference on my own data?

Hello! I'm a beginner, sorry to bother you! I would like to ask how I can use point cloud data from my own lvx or pcd files with this code. How was your bag file generated? Could you provide the source code?
Thank you! Looking forward to your reply!

rviz cannot init automatically

Following the readme.md, rviz does not init automatically; there are just 3 terminals, but the detection terminal seems fine.
By the way, the livox_mapping algorithm can init rviz well, so why can't the rviz in the detection part init?

Parameter Setting

Hi, thanks for your work.

May I ask what is the meaning of "VOXEL_SIZE" and "OVERLAP"? How do these two parameters affect the detection result?

Installation - no Makefile

Hello,

Following the installation steps, under the third step I executed the cmake command (cmake -DCMAKE_BUILD_TYPE=Release ..), but no Makefile was created.
Thus, when running make I get the error:
make: *** No targets specified and no makefile found. Stop.

Has anyone encountered a similar issue?

Model doubts.

Hi

A couple of quick questions -

  1. Are the results you show on livox lidar data collected from real-world road scenarios, or is it simulated data (livox-simu)?
  2. Do you train this on real-world data or the livox-simu data or a mixture of both?

Thanks

Problem about the yaw angle computation

In Lib_cpp.cpp, the yaw angle theta is computed as "theta = atan2(sin_theta, cos_theta)/2". Thus, the computed yaw angle will be in the range (-pi/2, pi/2]. I want to know why the yaw angle is divided by 2. It's confusing. Thank you!
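For reference, the range claim above follows directly from the definition of atan2, whatever the network outputs sin_theta and cos_theta represent:

\mathrm{atan2}(s,\,c) \in (-\pi,\ \pi] \;\Longrightarrow\; \theta = \tfrac{1}{2}\,\mathrm{atan2}(s,\,c) \in \left(-\tfrac{\pi}{2},\ \tfrac{\pi}{2}\right]

One common motivation for regressing a doubled angle in detection heads (not confirmed for this repo) is that a box rotated by pi is geometrically identical, so predicting (sin 2θ, cos 2θ) removes that ambiguity.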

High run time

I tried running detection on my data as well as on sample Livox data. But unlike mentioned runtime of 24 ms, for me run time is too high (>=400ms). Anything that can be done to improve the run time. Please suggest..
run_time

missing tracking ID

Hello.

I ran this program and was able to detect people.
However, I can't find the tracking ID and I don't know where it is implemented.

Please let me know if you know where it is implemented or how to do it.

Thank you.

About 3D bounding box calculation problem

According to the calculation in the get_3d_box() function, the 8 corner coordinates of the detection box are obtained.
Why is it necessary to apply a transformation with the T1 matrix to get the correct corner coordinates?
What does this T1 matrix mean?
Isn't (x, y, z) already in the reference radar (LiDAR) coordinate system?

T1 = np.array([[0.0, -1.0, 0.0, 0.0],
               [0.0, 0.0, -1.0, 0.0],
               [1.0, 0.0, 0.0, 0.0],
               [0.0, 0.0, 0.0, 1.0]]
              )
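# Hypothesis only (not documented in the repo): T1 appears to permute the
# LiDAR axes into a camera-style frame (x_cam = -y_lidar, y_cam = -z_lidar,
# z_cam = x_lidar), matching the KITTI velodyne-to-camera convention, so
# inv(T1) below maps corners computed in that frame back to the LiDAR frame.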
box3d_pts_3d[:, 0:3] = self.get_3d_box((l, w, h), ry,(x,y,z))
box3d_pts_3d = np.dot(np.linalg.inv(T1), box3d_pts_3d.T).T

Inference on TensorRT

I tried to make a C++ version of the inference, but after converting the model to ONNX and running it on TensorRT on a 2080 Ti, the per-layer inference times are as follows:
{Cast} 0.301ms
Conv/Conv2D__5 0.138ms
Conv/Conv2D 0.707ms
ReduceMean__9 132.021ms
Sub__11 0.218ms
(Unnamed Layer* 10) [ElementWise] + Redu 123.283ms
ReduceProd__30:0[Constant] 0.001ms
Cast__17 0.004ms
Div__18 0.004ms
PWN((Unnamed Layer* 24) [ElementWise], P 0.235ms
Conv/BatchNorm/Const:0 + (Unnamed Layer* 0.221ms
Conv_1/Conv2D 0.936ms
ReduceMean__23 121.545ms
Sub__25 0.217ms
(Unnamed Layer* 37) [ElementWise] + Redu 120.613ms
Cast__31 0.004ms
Div__32 0.004ms
PWN((Unnamed Layer* 51) [ElementWise], P 0.221ms
Conv/BatchNorm/Const:0_1 + (Unnamed Laye 0.224ms
MaxPool2D/MaxPool 0.140ms
Conv_2/Conv2D 0.045ms
ReduceMean__39 29.410ms
Sub__41 0.028ms
(Unnamed Layer* 65) [ElementWise] + Redu 29.799ms
ReduceProd__74:0[Constant] 0.001ms
Cast__47 0.004ms
Div__48 0.004ms
PWN((Unnamed Layer* 79) [ElementWise], P 0.031ms
Conv_2/BatchNorm/Const:0 + (Unnamed Laye 0.032ms
Conv_3/Conv2D 0.163ms
ReduceMean__53 29.614ms
Sub__55 0.055ms
(Unnamed Layer* 92) [ElementWise] + Redu 30.101ms
Cast__61 0.004ms
Div__62 0.004ms
PWN((Unnamed Layer* 106) [ElementWise], 0.057ms
Conv/BatchNorm/Const:0_4 + (Unnamed Laye 0.059ms
add 0.083ms
Conv_4/Conv2D 0.467ms
ReduceMean__67 29.748ms
Sub__69 0.112ms
(Unnamed Layer* 120) [ElementWise] + Red 29.792ms
Cast__75 0.003ms
Div__76 0.004ms
PWN((Unnamed Layer* 134) [ElementWise], 0.359ms
Conv_4/BatchNorm/Const:0 + (Unnamed Laye 0.114ms
MaxPool2D_1/MaxPool 0.072ms
Conv_5/Conv2D 0.031ms
ReduceMean__83 7.426ms
Sub__85 0.013ms
(Unnamed Layer* 148) [ElementWise] + Red 7.421ms
ReduceProd__146:0[Constant] 0.002ms
Cast__91 0.004ms
Div__92 0.004ms
PWN((Unnamed Layer* 162) [ElementWise], 0.018ms
Conv/BatchNorm/Const:0_7 + (Unnamed Laye 0.013ms
Conv_6/Conv2D 0.147ms
ReduceMean__97 7.433ms
Sub__99 0.028ms
(Unnamed Layer* 175) [ElementWise] + Red 7.622ms
Cast__105 0.004ms
Div__106 0.004ms
PWN((Unnamed Layer* 189) [ElementWise], 0.030ms
Conv_4/BatchNorm/Const:0_9 + (Unnamed La 0.032ms
add_1 0.043ms
Conv_7/Conv2D 0.031ms
ReduceMean__111 7.541ms
Sub__113 0.014ms
(Unnamed Layer* 203) [ElementWise] + Red 7.431ms
Cast__119 0.004ms
Div__120 0.004ms
PWN((Unnamed Layer* 217) [ElementWise], 0.018ms
Conv/BatchNorm/Const:0_11 + (Unnamed Lay 0.012ms
Conv_8/Conv2D 0.150ms
ReduceMean__125 7.323ms
Sub__127 0.027ms
(Unnamed Layer* 230) [ElementWise] + Red 7.427ms
Cast__133 0.003ms
Div__134 0.004ms
PWN((Unnamed Layer* 244) [ElementWise], 0.030ms
Conv_4/BatchNorm/Const:0_13 + (Unnamed L 0.031ms
add_2 0.044ms
Conv_9/Conv2D 0.466ms
ReduceMean__139 7.607ms
Sub__141 0.055ms
(Unnamed Layer* 258) [ElementWise] + Red 7.927ms
Cast__147 0.003ms
Div__148 0.004ms
PWN((Unnamed Layer* 272) [ElementWise], 0.058ms
Conv_9/BatchNorm/Const:0 + (Unnamed Laye 0.059ms
MaxPool2D_2/MaxPool 0.040ms
Conv_10/Conv2D 0.031ms
ReduceMean__155 1.785ms
Sub__157 0.009ms
(Unnamed Layer* 286) [ElementWise] + Red 1.743ms
ReduceProd__559:0[Constant] 0.001ms
Cast__163 0.004ms
Div__164 0.004ms
PWN((Unnamed Layer* 300) [ElementWise], 0.010ms
Conv_4/BatchNorm/Const:0_16 + (Unnamed L 0.007ms
Conv_11/Conv2D 0.124ms
ReduceMean__169 1.807ms
Sub__171 0.013ms
(Unnamed Layer* 313) [ElementWise] + Red 1.863ms
Cast__177 0.003ms
Div__178 0.004ms
PWN((Unnamed Layer* 327) [ElementWise], 0.019ms
Conv_9/BatchNorm/Const:0_18 + (Unnamed L 0.014ms
add_3 0.023ms
Conv_12/Conv2D 0.032ms
ReduceMean__183 1.743ms
Sub__185 0.007ms
(Unnamed Layer* 341) [ElementWise] + Red 1.743ms
Cast__191 0.003ms
Div__192 0.004ms
PWN((Unnamed Layer* 355) [ElementWise], 0.010ms
Conv_4/BatchNorm/Const:0_20 + (Unnamed L 0.006ms
Conv_13/Conv2D 0.124ms
ReduceMean__197 1.848ms
Sub__199 0.013ms
(Unnamed Layer* 368) [ElementWise] + Red 1.891ms
Cast__205 0.003ms
Div__206 0.004ms
PWN((Unnamed Layer* 382) [ElementWise], 0.020ms
Conv_9/BatchNorm/Const:0_22 + (Unnamed L 0.013ms
add_4 0.023ms
Conv_14/Conv2D 0.032ms
ReduceMean__211 1.830ms
Sub__213 0.008ms
(Unnamed Layer* 396) [ElementWise] + Red 1.743ms
Cast__219 0.004ms
Div__220 0.003ms
PWN((Unnamed Layer* 410) [ElementWise], 0.011ms
Conv_4/BatchNorm/Const:0_24 + (Unnamed L 0.007ms
Conv_15/Conv2D 0.124ms
ReduceMean__225 1.930ms
Sub__227 0.013ms
(Unnamed Layer* 423) [ElementWise] + Red 1.937ms
Cast__233 0.004ms
Div__234 0.003ms
PWN((Unnamed Layer* 437) [ElementWise], 0.020ms
Conv_9/BatchNorm/Const:0_26 + (Unnamed L 0.013ms
add_5 0.023ms
Conv_16/Conv2D 0.032ms
ReduceMean__239 1.745ms
Sub__241 0.008ms
(Unnamed Layer* 451) [ElementWise] + Red 1.743ms
Cast__247 0.004ms
Div__248 0.004ms
PWN((Unnamed Layer* 465) [ElementWise], 0.011ms
Conv_4/BatchNorm/Const:0_28 + (Unnamed L 0.007ms
Conv_17/Conv2D 0.124ms
ReduceMean__253 1.807ms
Sub__255 0.013ms
(Unnamed Layer* 478) [ElementWise] + Red 1.811ms
Cast__261 0.003ms
Div__262 0.004ms
PWN((Unnamed Layer* 492) [ElementWise], 0.019ms
Conv_9/BatchNorm/Const:0_30 + (Unnamed L 0.013ms
add_6 0.024ms
Conv_18/Conv2D 0.442ms
ReduceMean__267 1.945ms
Sub__269 0.027ms
(Unnamed Layer* 506) [ElementWise] + Red 1.910ms
Cast__275 0.003ms
Div__276 0.004ms
PWN((Unnamed Layer* 520) [ElementWise], 0.034ms
Conv_18/BatchNorm/Const:0 + (Unnamed Lay 0.031ms
MaxPool2D_3/MaxPool 0.019ms
Conv_19/Conv2D 0.037ms
ReduceMean__283 0.472ms
Sub__285 0.006ms
(Unnamed Layer* 534) [ElementWise] + Red 0.515ms
ReduceProd__500:0[Constant] 0.001ms
Cast__291 0.004ms
Div__292 0.004ms
PWN((Unnamed Layer* 548) [ElementWise], 0.007ms
Conv_9/BatchNorm/Const:0_33 + (Unnamed L 0.006ms
Conv_20/Conv2D 0.116ms
ReduceMean__297 0.509ms
Sub__299 0.008ms
(Unnamed Layer* 561) [ElementWise] + Red 0.472ms
Cast__305 0.003ms
Div__306 0.003ms
PWN((Unnamed Layer* 575) [ElementWise], 0.011ms
Conv_18/BatchNorm/Const:0_35 + (Unnamed 0.007ms
add_7 0.010ms
Conv_21/Conv2D 0.037ms
ReduceMean__311 0.472ms
Sub__313 0.005ms
(Unnamed Layer* 589) [ElementWise] + Red 0.526ms
Cast__319 0.004ms
Div__320 0.004ms
PWN((Unnamed Layer* 603) [ElementWise], 0.007ms
Conv_9/BatchNorm/Const:0_37 + (Unnamed L 0.005ms
Conv_22/Conv2D 0.116ms
ReduceMean__325 0.509ms
Sub__327 0.007ms
(Unnamed Layer* 616) [ElementWise] + Red 0.471ms
Cast__333 0.003ms
Div__334 0.004ms
PWN((Unnamed Layer* 630) [ElementWise], 0.011ms
Conv_18/BatchNorm/Const:0_39 + (Unnamed 0.006ms
add_8 0.010ms
Conv_23/Conv2D 0.037ms
ReduceMean__339 0.471ms
Sub__341 0.006ms
(Unnamed Layer* 644) [ElementWise] + Red 0.473ms
Cast__347 0.003ms
Div__348 0.004ms
PWN((Unnamed Layer* 658) [ElementWise], 0.007ms
Conv_9/BatchNorm/Const:0_41 + (Unnamed L 0.005ms
Conv_24/Conv2D 0.116ms
ReduceMean__353 0.510ms
Sub__355 0.008ms
(Unnamed Layer* 671) [ElementWise] + Red 0.472ms
Cast__361 0.003ms
Div__362 0.004ms
PWN((Unnamed Layer* 685) [ElementWise], 0.010ms
Conv_18/BatchNorm/Const:0_43 + (Unnamed 0.006ms
add_9 0.010ms
Conv_25/Conv2D 0.037ms
ReduceMean__367 0.472ms
Sub__369 0.005ms
(Unnamed Layer* 699) [ElementWise] + Red 0.538ms
Cast__375 0.003ms
Div__376 0.003ms
PWN((Unnamed Layer* 713) [ElementWise], 0.008ms
Conv_9/BatchNorm/Const:0_45 + (Unnamed L 0.005ms
Conv_26/Conv2D 0.117ms
ReduceMean__381 0.509ms
Sub__383 0.007ms
(Unnamed Layer* 726) [ElementWise] + Red 0.472ms
Cast__389 0.003ms
Div__390 0.004ms
PWN((Unnamed Layer* 740) [ElementWise], 0.010ms
Conv_18/BatchNorm/Const:0_47 + (Unnamed 0.006ms
add_10 0.010ms
Conv_27/Conv2D 0.037ms
ReduceMean__395 0.472ms
Sub__397 0.006ms
(Unnamed Layer* 754) [ElementWise] + Red 0.472ms
Cast__403 0.003ms
Div__404 0.004ms
PWN((Unnamed Layer* 768) [ElementWise], 0.007ms
Conv_9/BatchNorm/Const:0_49 + (Unnamed L 0.005ms
Conv_28/Conv2D 0.116ms
ReduceMean__409 0.510ms
Sub__411 0.007ms
(Unnamed Layer* 781) [ElementWise] + Red 0.472ms
Cast__417 0.003ms
Div__418 0.004ms
PWN((Unnamed Layer* 795) [ElementWise], 0.010ms
Conv_18/BatchNorm/Const:0_51 + (Unnamed 0.006ms
add_11 0.010ms
Conv_29/Conv2D 0.038ms
ReduceMean__423 0.604ms
Sub__425 0.006ms
(Unnamed Layer* 809) [ElementWise] + Red 0.472ms
Cast__431 0.003ms
Div__432 0.003ms
PWN((Unnamed Layer* 823) [ElementWise], 0.007ms
Conv_9/BatchNorm/Const:0_53 + (Unnamed L 0.005ms
Conv_30/Conv2D 0.116ms
ReduceMean__437 0.509ms
Sub__439 0.008ms
(Unnamed Layer* 836) [ElementWise] + Red 0.472ms
Cast__445 0.003ms
Div__446 0.004ms
PWN((Unnamed Layer* 850) [ElementWise], 0.011ms
Conv_18/BatchNorm/Const:0_55 + (Unnamed 0.006ms
add_12 0.010ms
Conv_31/Conv2D 0.052ms
ReduceMean__451 0.472ms
Sub__453 0.007ms
(Unnamed Layer* 864) [ElementWise] + Red 0.472ms
Cast__459 0.003ms
Div__460 0.004ms
PWN((Unnamed Layer* 878) [ElementWise], 0.010ms
Conv_18/BatchNorm/Const:0_57 + (Unnamed 0.006ms
Conv_32/Conv2D 0.433ms
ReduceMean__465 0.525ms
Sub__467 0.013ms
(Unnamed Layer* 891) [ElementWise] + Red 0.530ms
Cast__473 0.004ms
Div__474 0.004ms
PWN((Unnamed Layer* 905) [ElementWise], 0.021ms
Conv_32/BatchNorm/Const:0 + (Unnamed Lay 0.152ms
Conv_33/Conv2D 0.095ms
ReduceMean__479 0.472ms
Sub__481 0.007ms
(Unnamed Layer* 918) [ElementWise] + Red 0.473ms
Cast__487 0.003ms
Div__488 0.004ms
PWN((Unnamed Layer* 932) [ElementWise], 0.010ms
Conv_18/BatchNorm/Const:0_60 + (Unnamed 0.006ms
Conv_34/Conv2D 0.037ms
ReduceMean__493 0.472ms
Sub__495 0.005ms
(Unnamed Layer* 945) [ElementWise] + Red 0.473ms
Cast__501 0.003ms
Div__502 0.004ms
PWN((Unnamed Layer* 959) [ElementWise], 0.007ms
Conv_9/BatchNorm/Const:0_62 + (Unnamed L 0.005ms
Resize__505 0.024ms
Resize__505:0 copy 0.010ms
Conv_35/Conv2D 0.083ms
ReduceMean__510 1.743ms
Sub__512 0.013ms
(Unnamed Layer* 974) [ElementWise] + Red 1.979ms
Cast__518 0.003ms
Div__519 0.004ms
PWN((Unnamed Layer* 988) [ElementWise], 0.019ms
Conv_9/BatchNorm/Const:0_64 + (Unnamed L 0.014ms
Conv_36/Conv2D 0.442ms
ReduceMean__524 1.898ms
Sub__526 0.027ms
(Unnamed Layer* 1001) [ElementWise] + Re 1.857ms
Cast__532 0.003ms
Div__533 0.004ms
PWN((Unnamed Layer* 1015) [ElementWise], 0.034ms
Conv_18/BatchNorm/Const:0_66 + (Unnamed 0.031ms
Conv_37/Conv2D 0.082ms
ReduceMean__538 1.913ms
Sub__540 0.013ms
(Unnamed Layer* 1028) [ElementWise] + Re 1.810ms
Cast__546 0.003ms
Div__547 0.004ms
PWN((Unnamed Layer* 1042) [ElementWise], 0.020ms
Conv_9/BatchNorm/Const:0_68 + (Unnamed L 0.013ms
Conv_38/Conv2D 0.224ms
ReduceMean__552 1.805ms
Sub__554 0.014ms
(Unnamed Layer* 1055) [ElementWise] + Re 1.807ms
Cast__560 0.004ms
Div__561 0.006ms
PWN((Unnamed Layer* 1069) [ElementWise], 0.019ms
Conv_9/BatchNorm/Const:0_70 + (Unnamed L 0.013ms
Conv_39/BiasAdd 0.052ms
Conv_39/BiasAdd__563 0.007ms
Time over all layers: 826.180
The most time-consuming layer is always reduce_mean. How can I get 20 fps at inference? Thanks.

Error while using live LIDAR data from Mid 70 lidar

We are using a Livox Mid 70 lidar and trying to run the livox_detection package.

It works perfectly with the provided ros bag, but when we try to run the same program with live LiDAR data, we see the following error: IndexError: too many indices for array, on line 197, in LivoxCallback.

[ERROR] [1658247201.950686]: bad callback: <bound method Detector.LivoxCallback of <__main__.Detector object at 0x7f6f3226cf10>>
Traceback (most recent call last):
  File "/opt/ros/noetic/lib/python3/dist-packages/rospy/topics.py", line 750, in _invoke_callback
    cb(msg)
  File "livox_rosdetection.py", line 197, in LivoxCallback
    pointcloud_msg = pcl2.create_cloud_xyz32(header, points_list[:, 0:3])
IndexError: too many indices for array

Any help would be appreciated.

Unable to change voxel size

I get an error when I try to use any voxel size other than the one provided in the config (0.2, 0.2, 0.2).

/home/bhaskar/.local/lib/python3.7/site-packages/tensorflow/python/framework/dtypes.py:526: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint8 = np.dtype([("qint8", np.int8, 1)])
/home/bhaskar/.local/lib/python3.7/site-packages/tensorflow/python/framework/dtypes.py:527: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_quint8 = np.dtype([("quint8", np.uint8, 1)])
/home/bhaskar/.local/lib/python3.7/site-packages/tensorflow/python/framework/dtypes.py:528: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint16 = np.dtype([("qint16", np.int16, 1)])
/home/bhaskar/.local/lib/python3.7/site-packages/tensorflow/python/framework/dtypes.py:529: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_quint16 = np.dtype([("quint16", np.uint16, 1)])
/home/bhaskar/.local/lib/python3.7/site-packages/tensorflow/python/framework/dtypes.py:530: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint32 = np.dtype([("qint32", np.int32, 1)])
/home/bhaskar/.local/lib/python3.7/site-packages/tensorflow/python/framework/dtypes.py:535: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
np_resource = np.dtype([("resource", np.ubyte, 1)])
2016 448 60
(1, 2016, 448, 60)
WARNING:tensorflow:From /home/bhaskar/.local/lib/python3.7/site-packages/tensorflow/python/framework/op_def_library.py:263: colocate_with (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version.
Instructions for updating:
Colocations handled automatically by placer.
2021-12-30 13:31:07.269448: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 AVX512F FMA
2021-12-30 13:31:07.292682: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 3699850000 Hz
2021-12-30 13:31:07.293681: I tensorflow/compiler/xla/service/service.cc:150] XLA service 0x3527730 executing computations on platform Host. Devices:
2021-12-30 13:31:07.293718: I tensorflow/compiler/xla/service/service.cc:158] StreamExecutor device (0): ,
WARNING:tensorflow:From /home/bhaskar/.local/lib/python3.7/site-packages/tensorflow/python/training/saver.py:1266: checkpoint_exists (from tensorflow.python.training.checkpoint_management) is deprecated and will be removed in a future version.
Instructions for updating:
Use standard file APIs to check for files with this prefix.
Traceback (most recent call last):
File "/home/bhaskar/.local/lib/python3.7/site-packages/tensorflow/python/client/session.py", line 1334, in _do_call
return fn(*args)
File "/home/bhaskar/.local/lib/python3.7/site-packages/tensorflow/python/client/session.py", line 1319, in _run_fn
options, feed_dict, fetch_list, target_list, run_metadata)
File "/home/bhaskar/.local/lib/python3.7/site-packages/tensorflow/python/client/session.py", line 1407, in _call_tf_sessionrun
run_metadata)
tensorflow.python.framework.errors_impl.InvalidArgumentError: Assign requires shapes of both tensors to match. lhs shape= [3,3,60,64] rhs shape= [3,3,30,64]
[[{{node save/Assign_3}}]]

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "/home/bhaskar/.local/lib/python3.7/site-packages/tensorflow/python/training/saver.py", line 1276, in restore
{self.saver_def.filename_tensor_name: save_path})
File "/home/bhaskar/.local/lib/python3.7/site-packages/tensorflow/python/client/session.py", line 929, in run
run_metadata_ptr)
File "/home/bhaskar/.local/lib/python3.7/site-packages/tensorflow/python/client/session.py", line 1152, in _run
feed_dict_tensor, options, run_metadata)
File "/home/bhaskar/.local/lib/python3.7/site-packages/tensorflow/python/client/session.py", line 1328, in _do_run
run_metadata)
File "/home/bhaskar/.local/lib/python3.7/site-packages/tensorflow/python/client/session.py", line 1348, in _do_call
raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.InvalidArgumentError: Assign requires shapes of both tensors to match. lhs shape= [3,3,60,64] rhs shape= [3,3,30,64]
[[node save/Assign_3 (defined at livox_rosdetection.py:64) ]]

Caused by op 'save/Assign_3', defined at:
File "livox_rosdetection.py", line 321, in
livox = Detector()
File "livox_rosdetection.py", line 64, in init
saver = tf.train.Saver()
File "/home/bhaskar/.local/lib/python3.7/site-packages/tensorflow/python/training/saver.py", line 832, in init
self.build()
File "/home/bhaskar/.local/lib/python3.7/site-packages/tensorflow/python/training/saver.py", line 844, in build
self._build(self._filename, build_save=True, build_restore=True)
File "/home/bhaskar/.local/lib/python3.7/site-packages/tensorflow/python/training/saver.py", line 881, in _build
build_save=build_save, build_restore=build_restore)
File "/home/bhaskar/.local/lib/python3.7/site-packages/tensorflow/python/training/saver.py", line 513, in _build_internal
restore_sequentially, reshape)
File "/home/bhaskar/.local/lib/python3.7/site-packages/tensorflow/python/training/saver.py", line 354, in _AddRestoreOps
assign_ops.append(saveable.restore(saveable_tensors, shapes))
File "/home/bhaskar/.local/lib/python3.7/site-packages/tensorflow/python/training/saving/saveable_object_util.py", line 73, in restore
self.op.get_shape().is_fully_defined())
File "/home/bhaskar/.local/lib/python3.7/site-packages/tensorflow/python/ops/state_ops.py", line 223, in assign
validate_shape=validate_shape)
File "/home/bhaskar/.local/lib/python3.7/site-packages/tensorflow/python/ops/gen_state_ops.py", line 64, in assign
use_locking=use_locking, name=name)
File "/home/bhaskar/.local/lib/python3.7/site-packages/tensorflow/python/framework/op_def_library.py", line 788, in _apply_op_helper
op_def=op_def)
File "/home/bhaskar/.local/lib/python3.7/site-packages/tensorflow/python/util/deprecation.py", line 507, in new_func
return func(*args, **kwargs)
File "/home/bhaskar/.local/lib/python3.7/site-packages/tensorflow/python/framework/ops.py", line 3300, in create_op
op_def=op_def)
File "/home/bhaskar/.local/lib/python3.7/site-packages/tensorflow/python/framework/ops.py", line 1801, in init
self._traceback = tf_stack.extract_stack()

InvalidArgumentError (see above for traceback): Assign requires shapes of both tensors to match. lhs shape= [3,3,60,64] rhs shape= [3,3,30,64]
[[node save/Assign_3 (defined at livox_rosdetection.py:64) ]]

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "livox_rosdetection.py", line 321, in
livox = Detector()
File "livox_rosdetection.py", line 70, in init
saver.restore(self.sess, cfg.MODEL_PATH)
File "/home/bhaskar/.local/lib/python3.7/site-packages/tensorflow/python/training/saver.py", line 1312, in restore
err, "a mismatch between the current graph and the graph")
tensorflow.python.framework.errors_impl.InvalidArgumentError: Restoring from checkpoint failed. This is most likely due to a mismatch between the current graph and the graph from the checkpoint. Please ensure that you have not altered the graph expected based on the checkpoint. Original error:

Assign requires shapes of both tensors to match. lhs shape= [3,3,60,64] rhs shape= [3,3,30,64]
[[node save/Assign_3 (defined at livox_rosdetection.py:64) ]]

Caused by op 'save/Assign_3', defined at:
File "livox_rosdetection.py", line 321, in
livox = Detector()
File "livox_rosdetection.py", line 64, in init
saver = tf.train.Saver()
File "/home/bhaskar/.local/lib/python3.7/site-packages/tensorflow/python/training/saver.py", line 832, in init
self.build()
File "/home/bhaskar/.local/lib/python3.7/site-packages/tensorflow/python/training/saver.py", line 844, in build
self._build(self._filename, build_save=True, build_restore=True)
File "/home/bhaskar/.local/lib/python3.7/site-packages/tensorflow/python/training/saver.py", line 881, in _build
build_save=build_save, build_restore=build_restore)
File "/home/bhaskar/.local/lib/python3.7/site-packages/tensorflow/python/training/saver.py", line 513, in _build_internal
restore_sequentially, reshape)
File "/home/bhaskar/.local/lib/python3.7/site-packages/tensorflow/python/training/saver.py", line 354, in _AddRestoreOps
assign_ops.append(saveable.restore(saveable_tensors, shapes))
File "/home/bhaskar/.local/lib/python3.7/site-packages/tensorflow/python/training/saving/saveable_object_util.py", line 73, in restore
self.op.get_shape().is_fully_defined())
File "/home/bhaskar/.local/lib/python3.7/site-packages/tensorflow/python/ops/state_ops.py", line 223, in assign
validate_shape=validate_shape)
File "/home/bhaskar/.local/lib/python3.7/site-packages/tensorflow/python/ops/gen_state_ops.py", line 64, in assign
use_locking=use_locking, name=name)
File "/home/bhaskar/.local/lib/python3.7/site-packages/tensorflow/python/framework/op_def_library.py", line 788, in _apply_op_helper
op_def=op_def)
File "/home/bhaskar/.local/lib/python3.7/site-packages/tensorflow/python/util/deprecation.py", line 507, in new_func
return func(*args, **kwargs)
File "/home/bhaskar/.local/lib/python3.7/site-packages/tensorflow/python/framework/ops.py", line 3300, in create_op
op_def=op_def)
File "/home/bhaskar/.local/lib/python3.7/site-packages/tensorflow/python/framework/ops.py", line 1801, in init
self._traceback = tf_stack.extract_stack()

InvalidArgumentError (see above for traceback): Restoring from checkpoint failed. This is most likely due to a mismatch between the current graph and the graph from the checkpoint. Please ensure that you have not altered the graph expected based on the checkpoint. Original error:

Assign requires shapes of both tensors to match. lhs shape= [3,3,60,64] rhs shape= [3,3,30,64]
[[node save/Assign_3 (defined at livox_rosdetection.py:64) ]]

Error when running the py file

All the processes completed, but when I try

python livox_rosdetection.py

it reports an error (screenshot not reproduced). Any ideas?

Sample Rosbag

Could you please provide the Rosbag you prepared the demos with?

livox_detection iou3d_nms_cuda error

Hello

To whom it may concern.

I am trying to run the package livox_detection on Ubuntu Tegra 20.04 with ROS1.

I got the following issue.

When I run the command python3 test_ros.py --pt ../pt/livox_model_1.pt under tools, I get the following error:

ImportError: cannot import name 'iou3d_nms_cuda' from 'livoxdetection.ops.iou3d_nms' (unknown location)

I have no clue how to solve that issue.

Would you mind helping me? Thank you.

Inquiry into HAP SLAM

Hi. I got to this repo from the link given under your YouTube video. I'm wondering if the SLAM framework for the HAP has been released, or could we use the previously released horizon_highway_slam with the HAP?

Tracking Functionality?

Hi ! Thanks for your work!
It is amazing !

Is there any way to track a certain object with an ID number?

Thanks.
Best Regards.
