Running the objectDetector_SSD sample on TX2 with DeepStream 4.0

Hi all,
I am trying to run the objectDetector_SSD sample on Jetson TX2 with JetPack 4.2.1 and DeepStream 4.0. I followed the instructions closely, apart from the frozen-graph-to-uff conversion step: the instructions mention python 2, but the recent tensorflow-gpu package is built for python 3 (I installed tensorflow by following the instructions on Jetson Zoo).

Both deepstream-app and gst-launch-1.0 crash on startup - some details below, and a fuller log at https://rentry.co/tfky7.

Any ideas what I am missing? Thanks!

$ gst-launch-1.0 filesrc location=../../samples/streams/sample_1080p_h264.mp4 ! \
>         decodebin ! m.sink_0 nvstreammux name=m batch-size=1 width=1280 \
>         height=720 ! nvinfer config-file-path=config_infer_primary_ssd.txt ! \
>         nvvideoconvert ! nvdsosd ! nvegltransform ! nveglglessink

(gst-plugin-scanner:16115): GLib-GObject-WARNING **: 18:03:55.671: cannot register existing type 'GstInterpolationMethod'

(gst-plugin-scanner:16115): GLib-GObject-CRITICAL **: 18:03:55.671: g_param_spec_enum: assertion 'G_TYPE_IS_ENUM (enum_type)' failed

(gst-plugin-scanner:16115): GLib-GObject-CRITICAL **: 18:03:55.671: validate_pspec_to_install: assertion 'G_IS_PARAM_SPEC (pspec)' failed
Setting pipeline to PAUSED ...

Using winsys: x11 
Creating LL OSD context new
0:00:09.240571280 16114   0x55bb1bbe40 WARN                 nvinfer gstnvinfer.cpp:515:gst_nvinfer_logger:<nvinfer0> NvDsInferContext[UID 1]:checkEngineParams(): Could not find output layer 'MarkOutput_0' in engine
Pipeline is PREROLLING ...
Got context from element 'eglglessink0': gst.egl.EGLDisplay=context, display=(GstEGLDisplay)NULL;
Opening in BLOCKING MODE 
NvMMLiteOpen : Block : BlockType = 261 
NVMEDIA: Reading vendor.tegra.display-size : status: 6 
NvMMLiteBlockCreate : Block : BlockType = 261 
Creating LL OSD context new
Could not find NMS layer buffer while parsing
0:00:09.721346597 16114   0x55badd14f0 ERROR                nvinfer gstnvinfer.cpp:511:gst_nvinfer_logger:<nvinfer0> NvDsInferContext[UID 1]:fillDetectionOutput(): Failed to parse bboxes using custom parse function
Caught SIGSEGV

Hi,

Thanks for reporting this issue.
We will check this issue and will update more information with you later.

As an initial suggestion: have you followed the steps in /opt/nvidia/deepstream/deepstream-4.0/sources/objectDetector_SSD/README to set up the sample?

Thanks.

Thanks for the quick response.

Yes, I followed the steps in the README file closely with one exception: I used python 3 to convert the frozen pb into uff, because tensorflow-gpu for JetPack 4.2.1 is built for python 3. The README file refers to https://elinux.org/Jetson_Zoo#TensorFlow for tensorflow installation instructions, which I followed as well.

So step 4 in the README is slightly different in JetPack 4.2.1:

$ python3 /usr/lib/python3.6/dist-packages/uff/bin/convert_to_uff.py \
    frozen_inference_graph.pb -O NMS \
    -p /usr/src/tensorrt/samples/sampleUffSSD/config.py \
    -o sample_ssd_relu6.uff

I uploaded the uff file to NullUpload.com (crc32 of sample_ssd_relu6.uff.tar.bz2 is d73d48ea).
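In case it helps with verification after download: the checksum above is a zlib CRC-32. A small helper to compute it, assuming python3 is on the PATH (the common `crc32` tool should give the same value):

```shell
# Helper to compute the zlib CRC-32 of a file, printed as 8 hex digits
# (same polynomial/output as the common `crc32` command-line tool)
crc32_of() {
  python3 - "$1" <<'PY'
import sys, zlib
data = open(sys.argv[1], "rb").read()
print(format(zlib.crc32(data) & 0xffffffff, "08x"))
PY
}
```

After downloading, `crc32_of sample_ssd_relu6.uff.tar.bz2` should print d73d48ea.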

Hi,

We cannot reproduce this issue in our environment. The pipeline runs correctly without crashing.

$ gst-launch-1.0 filesrc location=../../samples/streams/sample_1080p_h264.mp4 ! decodebin ! m.sink_0 nvstreammux name=m batch-size=1 width=1280 height=720 ! nvinfer config-file-path=config_infer_primary_ssd.txt ! nvvideoconvert ! nvdsosd ! nvegltransform ! nveglglessink
$ deepstream-app -c deepstream_app_config_ssd.txt

Did you run the pipeline remotely or directly on the TX2?
Have you copied the ssd_coco_labels.txt to the directory?
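If not, a quick way to check that the files the sample expects are present (file names assumed from the README; run from the objectDetector_SSD directory):

```shell
# Print "ok" or "MISSING" for each file the SSD sample needs at runtime
# (names assumed from the objectDetector_SSD README, not taken from your logs)
check_files() {
  for f in "$@"; do
    if [ -e "$f" ]; then
      printf 'ok      %s\n' "$f"
    else
      printf 'MISSING %s\n' "$f"
    fi
  done
}

check_files sample_ssd_relu6.uff ssd_coco_labels.txt \
  config_infer_primary_ssd.txt \
  nvdsinfer_custom_impl_ssd/libnvdsinfer_custom_impl_ssd.so
```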

Thanks.

I am not sure what exactly the problem was, but after rebooting and starting everything from scratch, it works now.

I do recommend updating step 4 in the README file to include python 3 instructions instead of (or in addition to) the python 2 ones:

$ python3 /usr/lib/python3.6/dist-packages/uff/bin/convert_to_uff.py \
    frozen_inference_graph.pb -O NMS \
    -p /usr/src/tensorrt/samples/sampleUffSSD/config.py \
    -o sample_ssd_relu6.uff

Thanks again for the support.

Hi,

Yes, we also use python3 for the uff conversion.
Our guess is that you had a broken GStreamer cache and the reboot fixed the issue.
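If it happens again, deleting the registry cache by hand (instead of a full reboot) should be enough; GStreamer rebuilds it automatically. The path below is the default per-user cache location:

```shell
# Delete the per-user GStreamer plugin registry cache; it is rebuilt
# automatically the next time gst-launch-1.0 or deepstream-app starts.
rm -rf "$HOME/.cache/gstreamer-1.0"
```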

Good to know it works now : )
Thanks.