Pre-processing input image with Custom object model detection

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU) GPU RTX 2060
• DeepStream Version 5.0
• JetPack Version (valid for Jetson only)
• TensorRT Version 7.0.0
• NVIDIA GPU Driver Version (valid for GPU only) 460
• Issue Type( questions, new requirements, bugs) questions
• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing)
• Requirement details( This is for new requirement. Including the module name-for which plugin or for which sample application, the function description)

I am trying to implement a custom SSD ONNX model with DeepStream, and I have specified a parse function name in the config file property parse-bbox-func-name.
I want to perform some pre-processing steps on the input image so that my model works correctly, such as resizing the image, subtracting the mean, transposing it, etc. Since my custom code is called after the ONNX model has already processed the image, I am not sure how to do this.
How can I achieve the pre-processing in this case, so that the image is transformed as required before it is passed to the model?

The gst-nvinfer plugin already supports some pre-processing (resizing, normalization with mean subtraction, color format conversion, …). Can you tell us what kind of “transposing” you need?

Please refer to the nvinfer config file /opt/nvidia/deepstream/deepstream/samples/configs/tlt_pretrained_models/config_infer_primary_ssd.txt. It is our SSD model sample, and most of the pre-processing you mentioned is already included in it.
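For illustration, the pre-processing mentioned above is controlled by keys in the [property] group of the nvinfer config file. The values below are illustrative placeholders, not the contents of the shipped sample; nvinfer applies the normalization y = net-scale-factor * (x - offsets) per channel and resizes the frame to the model's input dimensions automatically:

```
[property]
# Normalization: y = net-scale-factor * (x - offsets)
net-scale-factor=0.0078431372
# Per-channel mean values to subtract (semicolon-separated)
offsets=127.5;127.5;127.5
# Color format expected by the model: 0=RGB, 1=BGR, 2=GRAY
model-color-format=1
# Custom bounding-box parser, as in your setup
parse-bbox-func-name=NvDsInferParseCustomSSD
```

Channel-order transposing (e.g. HWC to CHW) is handled by the plugin when feeding the network, so it normally does not need custom code.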

Hi @Fiona.Chen ,
When I try to run that example, I get the following error:
/opt/nvidia/deepstream/deepstream/samples/configs/tlt_pretrained_models$ deepstream-app -c config_infer_primary_ssd.txt
** ERROR: main:655: Failed to set pipeline to PAUSED
ERROR from src_bin_muxer: Output width not set
Debug info: gstnvstreammux.c(2283): gst_nvstreammux_change_state (): /GstPipeline:pipeline/GstBin:multi_src_bin/GstNvStreamMux:src_bin_muxer
App run failed

config_infer_primary_ssd.txt is an nvinfer config file, not a deepstream-app config file. Please edit deepstream_app_source1_detection_models.txt: in the [primary-gie] section, remove the line “config-file=config_infer_primary_frcnn.txt” and enable “config-file=config_infer_primary_ssd.txt”. Then save the file.

You can then run the command line:
deepstream-app -c deepstream_app_source1_detection_models.txt
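After the edit described above, the [primary-gie] section of deepstream_app_source1_detection_models.txt would look roughly like this (other keys in the section omitted for brevity; this is a sketch, not the full shipped file):

```
[primary-gie]
enable=1
# previously enabled: config-file=config_infer_primary_frcnn.txt
config-file=config_infer_primary_ssd.txt
```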

Please read the "DeepStream Reference Application - deepstream-app" section of the DeepStream 5.1 Release documentation to understand the parameters in the config file.

Thank you @Fiona.Chen
Where can I download the pretrained ssd/ssd_resnet18.etlt model for this example?

Can you read the /opt/nvidia/deepstream/deepstream/samples/configs/tlt_pretrained_models/README file carefully?