Custom Secondary Network

Please provide complete information as applicable to your setup.

**• Hardware Platform (Jetson / GPU)** Nano
**• DeepStream Version** 6.0.1
**• JetPack Version (valid for Jetson only)** 4.6.1
**• TensorRT Version**
**• NVIDIA GPU Driver Version (valid for GPU only)**
**• Issue Type (questions, new requirements, bugs)**
**• How to reproduce the issue?** (This is for bugs. Include which sample app is used, the configuration file contents, the command line used, and other details needed to reproduce the issue.)
**• Requirement details** (This is for new requirements. Include the module name, i.e. for which plugin or which sample application, and the function description.)

So I first trained a Keras InceptionResNetV2 model for binary classification. I then converted that model to a TFLite model, used the tf2onnx converter to convert the TFLite model to ONNX, and finally converted the ONNX model into a TensorRT engine. I'm using this TensorRT engine for inference in the deepstream-test2 Python app. Although the app runs without any errors, it does not display any output of the sgie on the video. Does DeepStream not support the model I've used? Do I need to make changes to the output layer? Any help would be really appreciated.
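A detail worth checking before the pipeline itself: the engine has a single output value (a sigmoid score), and nvinfer only attaches a classifier label when a class score clears classifier-threshold. The sketch below is a simplified pure-Python model of that behavior, not DeepStream's actual parsing code; the label names and the explicit negative-class branch are assumptions about how a custom parser for a one-output binary model might handle both classes.

```python
# Simplified sketch of threshold-based label selection for a binary
# model with a single sigmoid output. This is an illustrative model of
# the behavior, not DeepStream source code.

def parse_binary_output(sigmoid_value, labels=("female", "male"), threshold=0.51):
    """Map one sigmoid score to a label, mimicking per-class thresholding.

    With a single output neuron there is only one raw "class" score: if
    only that score is compared to the threshold, every negative sample
    produces no label at all, which looks like "no sgie output" on screen.
    A two-sided mapping like this one reports both classes.
    """
    if sigmoid_value > threshold:
        return labels[1]                  # score treated as P(positive class)
    if (1.0 - sigmoid_value) > threshold:
        return labels[0]                  # explicit negative-class handling
    return None                           # neither side clears the threshold

print(parse_binary_output(0.9))   # positive side
print(parse_binary_output(0.1))   # negative side
print(parse_binary_output(0.5))   # ambiguous: no label attached
```

If the default parser is kept instead, the labels file and classifier-threshold have to agree with how that single score is interpreted.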

You can test your model first, then test deepstream-test2. Please refer to DeepStream SDK FAQ - #25 by fanzh, "[DSx_All_App] How to use classification model as pgie?"; it is a sample for testing a classification model.

This is the error I’m getting.

gst-launch-1.0 filesrc location=/home/apxnano3/Desktop/tensorrt_face/data/test.jpg ! jpegdec ! videoconvert ! video/x-raw,format=I420 ! nvvideoconvert ! video/x-raw(memory:NVMM),format=NV12 ! mux.sink_0 nvstreammux name=mux batch-size=1 width=1280 height=720 ! nvinfer config-file-path=./dstest_appsrc_config.txt ! nvvideoconvert ! video/x-raw(memory:NVMM),format=RGBA ! nvdsosd ! nvvideoconvert ! video/x-raw,format=I420 ! jpegenc ! filesink location=out.jpg
Unknown or legacy key specified ‘is-classifier’ for group [property]
Setting pipeline to PAUSED …
0:00:51.105947846 24378 0x5579b98000 INFO nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1900> [UID = 1]: deserialized trt engine from :/home/apxnano3/Documents/converting_trt/gender.engine
INFO: [Implicit Engine Info]: layers num: 2
0 INPUT kFLOAT serving_default_input_1:0 3x160x160
1 OUTPUT kFLOAT StatefulPartitionedCall:0 1

0:00:51.106846098 24378 0x5579b98000 INFO nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2004> [UID = 1]: Use deserialized engine model: /home/apxnano3/Documents/converting_trt/gender.engine
0:00:51.271711691 24378 0x5579b98000 INFO nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus: [UID 1]: Load new model:./dstest_appsrc_config.txt sucessfully
Pipeline is PREROLLING …
ERROR: from element /GstPipeline:pipeline0/GstJpegDec:jpegdec0: No valid frames decoded before end of stream
Additional debug info:
gstvideodecoder.c(1161): gst_video_decoder_sink_event_default (): /GstPipeline:pipeline0/GstJpegDec:jpegdec0:
no valid frames found
ERROR: pipeline doesn’t want to preroll.
Setting pipeline to NULL …
Freeing pipeline …

From the error "No valid frames decoded before end of stream", decoding test.jpg failed. You can use "gst-launch-1.0 filesrc location=/home/apxnano3/Desktop/tensorrt_face/data/test.jpg ! jpegdec ! fakesink" to check. If test.jpg is OK, please share this jpg.

This is the image test.jpg.

The above command returns the same error:
ERROR: from element /GstPipeline:pipeline0/GstJpegDec:jpegdec0: No valid frames decoded before end of stream

Please share it in a zip file, because what I get is a PNG picture; it seems this jpg is not supported by jpegdec.
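jpegdec failing on a file named *.jpg often just means the bytes are not JPEG at all, e.g. a PNG saved with a .jpg extension. A small stdlib-only sketch that identifies the file by its magic bytes instead of its extension (the helper names are illustrative):

```python
# Identify an image file by its magic bytes rather than its extension.
# JPEG files start with FF D8 FF; PNG files start with 89 50 4E 47.

def sniff_image_type(data: bytes) -> str:
    if data[:3] == b"\xff\xd8\xff":
        return "jpeg"
    if data[:4] == b"\x89PNG":
        return "png"
    return "unknown"

def sniff_file(path: str) -> str:
    """Read the first few bytes of a file and classify it."""
    with open(path, "rb") as f:
        return sniff_image_type(f.read(8))

print(sniff_image_type(b"\x89PNG\r\n\x1a\n"))  # a renamed PNG
print(sniff_image_type(b"\xff\xd8\xff\xe0"))   # a real JPEG
```

Running `sniff_file("test.jpg")` on the failing image would show immediately whether jpegdec is being handed PNG data.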

I tried with another image and the command worked. deepstream-test2 (Python app) works perfectly with the default configuration, but when I change the configuration for my model it doesn't provide the classifier's output.

Please modify the configuration file according to your model's preprocessing and postprocessing. Did you use the gst-launch command to test your model? Does the whole command above give the right output?

################################################################################

[property]
gpu-id=0
net-scale-factor=1

onnx-file=/home/apxnano3/Documents/converting_trt/gender_nchw.onnx
model-engine-file=/home/apxnano3/Documents/converting_trt/gender.engine

batch-size=1

# 0=FP32 and 1=INT8 mode

network-mode=0
input-object-min-width=160
input-object-min-height=160
process-mode=2
model-color-format=0
gpu-id=0
gie-unique-id=2
operate-on-gie-id=1
#operate-on-class-ids=2
is-classifier=1
output-blob-names=StatefulPartitionedCall:0
classifier-async-mode=1
classifier-threshold=0.51

The above is the configuration I'm using for my model.
To answer your questions:

  1. Yes, I used the gst-launch command to test my model.
  2. The command worked fine and gave me the correct output.

The following is the configuration I used with the gst-launch command and it worked fine:

[property]
gpu-id=0
onnx-file=/home/apxnano3/Documents/converting_trt/gender_nchw.onnx
model-engine-file=/home/apxnano3/Documents/converting_trt/gender.engine
network-type=1
input-object-min-width=160
input-object-min-height=160
model-color-format=1
#gie-unique-id=2
#operate-on-gie-id=1
#operate-on-class-ids=0
#classifier-async-mode=1
#classifier-threshold=0.51

force-implicit-batch-dim=1
batch-size=1
network-mode=1
num-detected-classes=4
interval=0
gie-unique-id=1
output-blob-names=StatefulPartitionedCall:0
#scaling-filter=0
#scaling-compute-hw=0
cluster-mode=2
is-classifier=1

[class-attrs-all]
pre-cluster-threshold=0.2
topk=20
nms-iou-threshold=0.5
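A note on one visible difference between the two configs: the working one sets network-type=1, while the sgie config relies only on is-classifier, which the gst-launch log earlier flagged as "Unknown or legacy key specified 'is-classifier'". On DeepStream 6.x the classifier role is declared like this (fragment for illustration):

```
# network-type replaces the legacy is-classifier key:
# 0=detector, 1=classifier, 2=segmentation, 3=instance segmentation
network-type=1
```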

Can you please suggest what changes I should make to my configuration file?

  1. If "The command worked fine and gave me the correct output", your model is supported by DeepStream.
  2. Is your model used to recognize people? If so, operate-on-class-ids should be 2; please refer to deepstream\deepstream\samples\models\Primary_Detector\labels.txt.

The net-scale-factor value is missing; try setting it.
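On choosing that value: Keras's InceptionResNetV2 preprocess_input maps pixels to [-1, 1] via x/127.5 - 1, while nvinfer computes y = net-scale-factor * (x - offset) per channel. Matching the two gives net-scale-factor = 1/127.5 and offsets = 127.5 for each channel. This mapping assumes the model was trained with the standard Keras preprocessing; verify against the actual training pipeline. A quick sanity check of the arithmetic:

```python
# Check that nvinfer-style preprocessing with
#   net-scale-factor = 1/127.5, offsets = 127.5;127.5;127.5
# reproduces Keras InceptionResNetV2 preprocess_input: x/127.5 - 1.

NET_SCALE_FACTOR = 1.0 / 127.5   # ~0.0078431373
OFFSET = 127.5                   # same offset for all three channels

def keras_preprocess(pixel: float) -> float:
    return pixel / 127.5 - 1.0

def nvinfer_preprocess(pixel: float) -> float:
    return NET_SCALE_FACTOR * (pixel - OFFSET)

for p in (0.0, 127.5, 255.0):
    assert abs(keras_preprocess(p) - nvinfer_preprocess(p)) < 1e-9
print("preprocessing matches for pixel values 0, 127.5, 255")
```

In the config that would be `net-scale-factor=0.0078431373` together with `offsets=127.5;127.5;127.5`.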

I was able to run the gst-launch command. But when I tried using the same model in the deepstream app it doesn’t work.

I made changes to my configuration file as suggested by @fanzh. The updated config is as follows:

[property]
gpu-id=0
onnx-file=/home/apxnano3/Documents/converting_trt/gender_nchw.onnx
model-engine-file=/home/apxnano3/Documents/converting_trt/gender.engine
labelfile-path=/home/apxnano3/Documents/converting_trt/labels.txt
force-implicit-batch-dim=1
batch-size=1

# 0=FP32 and 1=INT8 mode

network-mode=1
input-object-min-width=160
input-object-min-height=160
process-mode=2
model-color-format=1
gpu-id=0
gie-unique-id=2
operate-on-gie-id=1
operate-on-class-ids=2
is-classifier=1
num-detected-classes=2
output-blob-names=StatefulPartitionedCall:0
classifier-async-mode=1
classifier-threshold=0.51
#process-mode=2
#scaling-filter=0
#scaling-compute-hw=0

But it is still not working. By the way, I'm using PeopleNet as the pgie and my model to classify gender on the detected faces.

  1. Do you still need support? What is your input source? For the second model, please try
    classifier-async-mode=0
    process-mode=2
  2. nvinfer is open source; please add logs to debug. Please refer to DeepStream SDK FAQ - #9 by mchi to check whether the input tensor of the second model is OK.
  3. If it still does not work, please provide your simplified code, model, and configuration file to reproduce this issue.
  1. My input source is an h264 video.

  2. In the link that you have provided, am I supposed to add the provided file to the "/opt/nvidia/deepstream/deepstream/sources/libs/nvdsinfer/" directory?

No, you need to port the modification from dump_infer_input_to_file.patch.txt, then build and replace the old .so. With this method you can check whether the input tensor of the second model is OK.
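Once a dump of the sgie's input tensor exists, it can be sanity-checked offline. The file layout assumed below (a flat little-endian FP32 buffer with no header) and the expected value range are assumptions based on the 3x160x160 input reported in the engine log and x/127.5 - 1 style preprocessing; adjust to whatever the actual patch writes.

```python
import struct

# Sanity-check a raw FP32 tensor dump against the expected 3x160x160
# sgie input. Assumed dump format: flat little-endian float32, no header.

EXPECTED_ELEMENTS = 3 * 160 * 160

def check_tensor_dump(raw: bytes):
    """Return (size_ok, range_ok, min, max) for a raw tensor buffer."""
    n = len(raw) // 4
    values = struct.unpack(f"<{n}f", raw[: n * 4])
    ok_size = (n == EXPECTED_ELEMENTS)
    lo, hi = min(values), max(values)
    # After x/127.5 - 1 preprocessing, values should sit in [-1, 1].
    ok_range = -1.001 <= lo and hi <= 1.001
    return ok_size, ok_range, lo, hi

# Example with a synthetic buffer standing in for a real dump file:
fake = struct.pack(f"<{EXPECTED_ELEMENTS}f", *([0.5] * EXPECTED_ELEMENTS))
print(check_tensor_dump(fake)[:2])  # (True, True)
```

Values stuck near 0 or spanning 0..255 would point to a preprocessing mismatch (wrong net-scale-factor or offsets) rather than a model problem.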

There has been no update from you for a while, so we assume this is no longer an issue.
Hence we are closing this topic. If you need further support, please open a new one.
Thanks

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.