Bad classification result using deepstream 3.0 samples on Xavier

Hi all, I’m new to using DeepStream 3.0 on Jetson Xavier. After running the samples with the commands “deepstream-app -c source30_720p_dec_infer-resnet_tiled_display_int8.txt” and “deepstream-app -c source4_720p_dec_infer-resnet_tracker_sgie_tiled_display_int8.txt”, I found that the classification result on the provided video “streams/sample_720p.mp4” is extremely bad. I wrote the output to an “output.mp4” file to see the results. I can only get classification results by reducing the threshold in “config_infer_primary.txt” below 0.001, at which point several bounding boxes appear with nearly no classification effect. All the boxes are static, without any movement or change over time. Has anybody else faced this problem? Is it because the provided default model is not good, or do I have some wrong settings? Thank you so much!
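For reference, the detection threshold mentioned above lives in the [class-attrs-all] group of the nvinfer configuration file. The snippet below is a sketch of that section as it appears in the DeepStream 3.0 sample configs; the exact values on your install may differ:

```ini
[class-attrs-all]
# Minimum detection confidence; boxes scoring below this are dropped
threshold=0.2
# DBSCAN clustering parameters for grouping raw detections into boxes
eps=0.2
group-threshold=1
```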

Hi,

We can get a good result with the config: source30_720p_dec_infer-resnet_tiled_display_int8.txt.
Could you check it again?

The only modification we made is to turn on the video recording:

[sink1]
enable=1
type=3
#1=mp4 2=mkv
container=1
#1=h264 2=h265 3=mpeg4
codec=1
sync=0
bitrate=2000000
output-file=out.mp4
source-id=0

Thanks.

Hi, thank you so much for your reply!!
When I run the samples, the following warnings appear:

(gst-plugin-scanner:17299): GStreamer-WARNING **: 09:22:13.110: Failed to load plugin ‘/usr/lib/aarch64-linux-gnu/gstreamer-1.0/libcluttergst3.so’: /usr/lib/aarch64-linux-gnu/libgbm.so.1: undefined symbol: drmGetDevice2

(gst-plugin-scanner:17299): GStreamer-WARNING **: 09:22:14.236: Failed to load plugin ‘/usr/lib/aarch64-linux-gnu/gstreamer-1.0/libgstkms.so’: /usr/lib/aarch64-linux-gnu/gstreamer-1.0/libgstkms.so: undefined symbol: drmModeGetFB

(gst-plugin-scanner:17299): GStreamer-WARNING **: 09:22:14.542: Failed to load plugin ‘/usr/lib/aarch64-linux-gnu/gstreamer-1.0/libgstopengl.so’: /usr/lib/aarch64-linux-gnu/libgbm.so.1: undefined symbol: drmGetDevice2

(gst-plugin-scanner:17299): GStreamer-WARNING **: 09:22:14.553: Failed to load plugin ‘/usr/lib/aarch64-linux-gnu/gstreamer-1.0/libgstopenglmixers.so’: /usr/lib/aarch64-linux-gnu/libgbm.so.1: undefined symbol: drmGetDevice2

Do these warnings matter? Also, I am using JetPack 4.1.1 with TensorRT 5.0.3, which should be okay.

Moreover, the following warning appears the first time I run the sample. It seems the model engine file could not be opened:
Warning. Could not open model engine file /home/ubuntu/deepstream_sdk_on_jetson/samples/configs/deepstream-app/…/…/models/Primary_Detector/resnet10.caffemodel_b30_int8.engine

Thank you so much!!

Hi all, I found the above warnings were resolved after I relinked libdrm.so.2 to libdrm.so.2.4.0. Actually, these warnings seem harmless and not critical.
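For anyone else hitting the same libdrm warnings, the fix above is an ordinary symlink update. The sketch below demonstrates the mechanics in a scratch directory so it is safe to run anywhere; on the Xavier the real files live under /usr/lib/aarch64-linux-gnu (check ls -l there first, and run sudo ldconfig after changing the real link):

```shell
# Demonstrated in a scratch directory; substitute the real
# /usr/lib/aarch64-linux-gnu paths on the device.
mkdir -p /tmp/libdrm-demo
cd /tmp/libdrm-demo
touch libdrm.so.2.4.0               # stand-in for the versioned library
ln -sf libdrm.so.2.4.0 libdrm.so.2  # point the soname link at it
readlink libdrm.so.2                # prints: libdrm.so.2.4.0
```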
The real issue is that “libgstnvinfer.so” has problems on my system. After replacing the “nvinfer” element with the “nvyolo” element (from the YOLO example on GitHub) in the DeepStream source code, the samples finally run successfully with good classification results.
So I think that on my system, the “nvinfer” component, which is provided by the plugin “libgstnvinfer.so”, has some problem. Where does this “libgstnvinfer.so” plugin come from: TensorRT or DeepStream? How can I find its source code? Or how can I replace or update it?
Thank you so much!!

Hi,

libgstnvinfer.so is included in the DeepStream SDK and is not open source.
To update it, please try reinstalling the DeepStream package.

Thanks.

Hi AastaLLL,
Thank you so much for your help!! The problem is that I still cannot get “nvinfer” to perform correctly. Although the application runs, there are no classification results from “nvinfer”. Profiling with nvprof suggests the data movement between CPU and GPU has a problem: I can only see a memory copy from GPU to CPU after each inference, but there is no memory copy from CPU to GPU before each inference. Does this mean the required input data isn’t being fed to the GPU? Could this be the reason I never get classification results? I have also tried many examples, such as “fasterRCNN” and “yolo”, with “nvinfer”; as long as I use “nvinfer”, the classification doesn’t happen. This confuses me a lot.
Thank you so much for your help!!

Hi,

Which model do you use?
If you are using a customized model, please update the parser based on your output format.
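For reference, the parser hookup is done in the nvinfer configuration: a custom output parser is compiled into a shared library and referenced from the [property] group. The function name and library path below are placeholders for illustration, not files that ship with the SDK:

```ini
[property]
# Exported parsing function in your custom library (placeholder name)
parse-bbox-func-name=NvDsInferParseCustomMyModel
# Shared object implementing the parser (placeholder path)
custom-lib-path=/path/to/libnvdsinfer_custom_parser.so
```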

Thanks.

I have tried the provided ResNet model in the default sample configurations, but it never gives the right classification results. Currently, I use the YOLO model with the new “nvyolo” plugin, and it works. I believe the “nvinfer” plugin provided by “libgstnvinfer.so” in my environment has some problem, but it’s not open source and I cannot figure out what is wrong. Thank you!!

It’s good to know you have an alternative now.
Thanks for the update.