Problem with objectDetector_SSD

I’m trying to run my own SSD model through “deepstream-app” using the config file “deepstream_app_config_ssd.txt”. I have serialized the model into an engine file with TensorRT, but there seem to be some problems.
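
For context, the [property] section of the infer config that the app config points to (config_infer_primary_ssd.txt in the sample) contains lines roughly like the ones below; the file names and values here are placeholders, not my exact settings:

    [property]
    model-engine-file=serialized_engine.engine
    labelfile-path=ssd_coco_labels.txt
    batch-size=1
    network-mode=0
    num-detected-classes=91
    gie-unique-id=1
    parse-bbox-func-name=NvDsInferParseCustomSSD
    custom-lib-path=nvdsinfer_custom_impl_ssd/libnvdsinfer_custom_impl_ssd.so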

P4:~/Downloads/DeepStream_Release/sources/objectDetector_SSD$ deepstream-app -c deepstream_app_config_ssd.txt

** (deepstream-app:33246): CRITICAL **: gst_ffmpeg_cfg_set_property: assertion ‘qdata->size == sizeof (gint64)’ failed

Using TRT model serialized engine /home/hite/Downloads/DeepStream_Release/sources/objectDetector_SSD/serialized_engine.engine crypto flags(0)

Runtime commands:
    h: Print this help
    q: Quit

    p: Pause
    r: Resume

**PERF: FPS 0 (Avg)
**PERF: 0.00 (0.00)
** INFO: <bus_callback:98>: Pipeline ready

** INFO: <bus_callback:84>: Pipeline running

Could not find NMS layer buffer while parsing
ERROR from primary_gie_classifier: Infer operation failed
Debug info: gstnvinfer.c(781): gst_nvinfer_inference_thread (): /GstPipeline:pipeline/GstBin:primary_gie_bin/GstNvInfer:primary_gie_classifier
Could not find NMS layer buffer while parsing
ERROR from primary_gie_classifier: Infer operation failed
Debug info: gstnvinfer.c(781): gst_nvinfer_inference_thread (): /GstPipeline:pipeline/GstBin:primary_gie_bin/GstNvInfer:primary_gie_classifier
Quitting
Could not find NMS layer buffer while parsing
Could not find NMS layer buffer while parsing
Could not find NMS layer buffer while parsing
Could not find NMS layer buffer while parsing
Could not find NMS layer buffer while parsing
Could not find NMS layer buffer while parsing
Could not find NMS layer buffer while parsing
ERROR from primary_gie_classifier: Infer operation failed
Debug info: gstnvinfer.c(781): gst_nvinfer_inference_thread (): /GstPipeline:pipeline/GstBin:primary_gie_bin/GstNvInfer:primary_gie_classifier
ERROR from primary_gie_classifier: Infer operation failed
Debug info: gstnvinfer.c(781): gst_nvinfer_inference_thread (): /GstPipeline:pipeline/GstBin:primary_gie_bin/GstNvInfer:primary_gie_classifier
ERROR from primary_gie_classifier: Infer operation failed
Debug info: gstnvinfer.c(781): gst_nvinfer_inference_thread (): /GstPipeline:pipeline/GstBin:primary_gie_bin/GstNvInfer:primary_gie_classifier
ERROR from primary_gie_classifier: Infer operation failed
Debug info: gstnvinfer.c(781): gst_nvinfer_inference_thread (): /GstPipeline:pipeline/GstBin:primary_gie_bin/GstNvInfer:primary_gie_classifier
ERROR from primary_gie_classifier: Infer operation failed
Debug info: gstnvinfer.c(781): gst_nvinfer_inference_thread (): /GstPipeline:pipeline/GstBin:primary_gie_bin/GstNvInfer:primary_gie_classifier
ERROR from primary_gie_classifier: Infer operation failed
Debug info: gstnvinfer.c(781): gst_nvinfer_inference_thread (): /GstPipeline:pipeline/GstBin:primary_gie_bin/GstNvInfer:primary_gie_classifier
ERROR from primary_gie_classifier: Infer operation failed
Debug info: gstnvinfer.c(781): gst_nvinfer_inference_thread (): /GstPipeline:pipeline/GstBin:primary_gie_bin/GstNvInfer:primary_gie_classifier
Could not find NMS layer buffer while parsing
Could not find NMS layer buffer while parsing
Could not find NMS layer buffer while parsing
Could not find NMS layer buffer while parsing
App run failed

BTW, my serialized engine is built from a Caffe model. Could this be the cause of the error?

Did you try the default model, and did it run successfully?

Did you try this command:

$ gst-launch-1.0 filesrc location=../../samples/streams/sample_720p.mp4 ! \
        decodebin ! nvinfer config-file-path=config_infer_primary_ssd.txt ! \
        nvvidconv ! nvosd ! nveglglessink

@ChrisDing
I have already run config_infer_primary_ssd.txt successfully with the default UFF model, but my own model was trained with Caffe.

Would it be possible to use the Caffe SSD model directly with the DeepStream 3.0 SSD plugin, instead of using a UFF-format model?

Does your network have an NMS layer? If not, could you enable the NMS layer and try again?

It does have an NMS function, but it is a parameter of the “detection_out” layer in Caffe.
[Attachment: 20190219153707.png]
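
For reference, in the Caffe SSD fork the NMS settings are fields of the DetectionOutput layer’s parameters rather than a separate layer. A typical deploy.prototxt entry looks roughly like this (the values are illustrative, not my trained model’s exact ones):

    layer {
      name: "detection_out"
      type: "DetectionOutput"
      bottom: "mbox_loc"
      bottom: "mbox_conf_flatten"
      bottom: "mbox_priorbox"
      top: "detection_out"
      include { phase: TEST }
      detection_output_param {
        num_classes: 21
        share_location: true
        background_label_id: 0
        nms_param {
          nms_threshold: 0.45
          top_k: 400
        }
        code_type: CENTER_SIZE
        keep_top_k: 200
        confidence_threshold: 0.01
      }
    }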

Would it be possible to use the Caffe SSD model directly with the DeepStream 3.0 SSD plugin, instead of using a UFF-format model? ==> Yes, you could refer to the other config files in the DeepStream package that use a Caffe model.
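
For illustration, those Caffe-based configs point nvinfer at the caffemodel/prototxt pair with keys along these lines (the file names here are placeholders):

    [property]
    model-file=ssd.caffemodel
    proto-file=ssd_deploy.prototxt
    labelfile-path=labels.txt
    batch-size=1
    network-mode=0
    num-detected-classes=21
    gie-unique-id=1
    output-blob-names=detection_out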

The NMS in your attached PNG is not a layer. Regarding the SSD network, you could refer to the sampleUffSSD sample in the TensorRT package.

Thanks!

Can you share the config files? We have trained a ResNet-10 SSD model.