LidarSocket::readPacketsFromNet, queues full, losing packets

Hello everybody! I have a problem with the occupancy grid sample in combination with the Velodyne VLP16 Puck.

Briefly: I integrated the sample into ROS, and together with the sample data it runs fine, with no errors. I then initialized the sample with the lidar, and it also works with live data; however, after about 20 seconds I get the following message in the terminal: “LidarSocket::readPacketsFromNet, queues full, losing packets, 192.168.1.220”. The sample still runs, but I notice some irregularities in the rendering after those 20 seconds. In general, the visualization is much slower than in the original sample; you can see the scan flowing around as it sweeps. Could this be because I only use about 400 points?

In the video (inside the attached archive) you can see:

  • the initialization of the sensor
  • the sample running
  • the error appearing after about 20 seconds

I will also include some pictures.

Maybe someone knows the solution to this problem or can give some advice; I would appreciate it.

simplescreenrecorder-2020-10-29_19.40.34.mp4.tar.gz (2.4 MB)

Dear @zml-koop,
Do you see any issue with sample_lidar_replay using recorded lidar data? Also, could you please provide sample repro code?

Dear @SivaRamaKrishnaNV,

I just tested sample_lidar_replay with the recorded sample data as well as with live data from my sensor, first without the ROS integration and afterwards again within the ROS environment. Everything runs without problems. As source code, I am only uploading the one file I modified, since I didn’t make changes in any other files, so everything else is original NVIDIA source code. I assume that you have the necessary files; if not, I can upload them. For easier navigation through the source code, please read the description at the beginning of the file.

Thank you very much for the help, I appreciate it.

repro_code_occupancy_sample.cpp (31.2 KB)

Dear @zml-koop,
Thank you for sharing the code. We will check the issue and get back to you.

Dear @zml-koop,
I checked the changes w.r.t. the original sample code. May I know why change #3 is needed? Do you notice the same issue without it?

Dear @SivaRamaKrishnaNV,
Thank you for checking the code.
The reason for change #3 is that the condition (lastLidarTime < currentImageTime) is never fulfilled, because the timestamp of my lidar (live data) never becomes smaller than currentImageTime (see the picture of the shell output), so the while loop is only entered once, via its initial state (lastLidarTime = 0). Accordingly, only a single calculation is visible in the visualization (see picture). In addition, the error still appears after 20 seconds, independent of change #3.

The available selections for a timestamp from nextPacket are:

  • hostTimestamp (default)
  • sensorTimestamp

When using the sensorTimestamp, the while loop (change #3) is not exited until the condition (lastLidarTime < currentImageTime) is no longer fulfilled, which takes a very long time due to the large difference (see picture).
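To make this easier to follow, here is a simplified sketch of the relevant loop as I adapted it. It is not the exact sample code: consumeLidarUntil(), accumulatePoints() and the 100 ms timeout are placeholders of mine, while lastLidarTime / currentImageTime are the variables discussed above and the DriveWorks calls are the ones I use from the lidar samples.

```cpp
#include <dw/sensors/lidar/Lidar.h>

// Placeholder for the occupancy-grid update done inside the loop.
void accumulatePoints(const dwLidarDecodedPacket* packet);

// Simplified sketch of the accumulation loop around change #3.
void consumeLidarUntil(dwTime_t currentImageTime, dwSensorHandle_t lidarSensor)
{
    dwTime_t lastLidarTime = 0;

    while (lastLidarTime < currentImageTime) // with live hostTimestamp this holds only once
    {
        const dwLidarDecodedPacket* nextPacket = nullptr;
        if (dwSensorLidar_readPacket(&nextPacket, 100000 /*us*/, lidarSensor) != DW_SUCCESS)
            break;

        // Option 1 (default): host receive time. With live data this is already
        // larger than currentImageTime, so only a single packet gets processed.
        lastLidarTime = nextPacket->hostTimestamp;

        // Option 2: sensor time. This is far smaller than currentImageTime, so
        // the loop keeps running for a very long time before the condition fails.
        // lastLidarTime = nextPacket->sensorTimestamp;

        accumulatePoints(nextPacket);
        dwSensorLidar_returnPacket(nextPacket, lidarSensor);
    }
}
```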

NVIDIA_scrennshot_2

Do you have a solution or an idea?
Thank you very much for your help.

Dear @SivaRamaKrishnaNV,

In the meantime I was able to find out the following: the error message has something to do with the number of packets per spin. The comparison between the two data sets gives:
Sample data → packetsPerSpin = 1
Live lidar data → packetsPerSpin = 76
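(I read these numbers from the lidar properties, roughly like this; the call and the packetsPerSpin field are the ones I know from the lidar samples, so treat this as a sketch:)

```cpp
#include <dw/sensors/lidar/Lidar.h>

// Sketch: query the packets-per-spin value reported by the lidar driver.
uint32_t queryPacketsPerSpin(dwSensorHandle_t lidarSensor)
{
    dwLidarProperties props{};
    dwSensorLidar_getProperties(&props, lidarSensor);
    return props.packetsPerSpin; // 1 for the sample recording, 76 for my live VLP-16
}
```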

So my guess is that some “packet memory queue” is being filled faster than the occupancy grid sample drains it, and that is why the error message appears.

For this reason I performed the following test: by changing the return type in the VLP-16 web interface from “single return” to “dual return”, the number of packets per spin doubles, so the “packet memory queue” fills up twice as fast and the error message appears earlier.
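If this guess is right, then draining every packet that is already queued in each frame, instead of only one, should at least delay the message. A sketch of what I mean (assuming that a timeout of 0 makes dwSensorLidar_readPacket return immediately with DW_TIME_OUT once nothing is queued):

```cpp
#include <dw/sensors/lidar/Lidar.h>

// Sketch: empty the lidar queue every frame so LidarSocket's internal buffer
// cannot fill up between occupancy-grid updates.
void drainQueuedPackets(dwSensorHandle_t lidarSensor)
{
    const dwLidarDecodedPacket* packet = nullptr;

    while (dwSensorLidar_readPacket(&packet, 0 /*us*/, lidarSensor) == DW_SUCCESS)
    {
        // ... feed the packet into the occupancy grid here ...
        dwSensorLidar_returnPacket(packet, lidarSensor);
    }
}
```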

Is there a possibility (a function) with which I can regulate packetsPerSpin? Or is there a way to adjust this “memory” of the occupancy grid sample?
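For context, this is roughly how I create the live sensor. The parameter string follows the lidar sample documentation; the IP and port here are only examples, and I am not sure whether the scan-frequency parameter actually changes the number of packets per spin on the VLP-16.

```cpp
#include <dw/sensors/Sensors.h>
#include <dw/sensors/lidar/Lidar.h>

// Sketch of the live-sensor creation, parameter names as in the lidar samples.
dwSensorHandle_t createVlp16(dwSALHandle_t sal)
{
    dwSensorParams params{};
    params.protocol   = "lidar.socket";
    params.parameters = "device=VELO_VLP16,ip=192.168.1.220,port=2368,scan-frequency=10.0";

    dwSensorHandle_t lidarSensor = DW_NULL_HANDLE;
    dwSAL_createSensor(&lidarSensor, params, sal);
    dwSensor_start(lidarSensor);
    return lidarSensor;
}
```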
I am very grateful for every hint.