NasNet-based model ingestion into DeepStream

Goal: receive input as an RTSP stream and run inference with a custom NasNet-based model (a TensorFlow model) within DeepStream

Background on the model:

  • Inputs: JPEG → NumPy ndarrays → TensorFlow tensors
  • Outputs: confidence masks, classification masks, and vector masks used in non-max suppression calculations

What has been tried so far:

  • Using the Python TensorRT API, parsed the ONNX export of the model and was able to generate a serialized .engine file (a sketch of this path follows the list below)

  • Tried creating a UFF file first, but found two TensorRT-incompatible layers (FusedBatchNormV3 and AddV2)

  • When using the C++ TensorRT API, hit segfaults when trying to generate a .engine file

  • Created a config file (deepstream_app_config_mec.txt) for the RTSP stream (H264-encoded), with paths to the ONNX model and engine file

  • Created a config file (config_infer_mec.txt) for nvinfer, detailing object labels and model paths
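
For reference, a minimal sketch of the Python TensorRT path from the first bullet, using the TensorRT 7-era API; the output filename and workspace size are assumptions, the ONNX filename matches the config below:

import tensorrt as trt

# Sketch: parse the ONNX model and serialize a TensorRT engine.
TRT_LOGGER = trt.Logger(trt.Logger.WARNING)
EXPLICIT_BATCH = 1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)

with trt.Builder(TRT_LOGGER) as builder, \
     builder.create_network(EXPLICIT_BATCH) as network, \
     trt.OnnxParser(network, TRT_LOGGER) as parser:
    with open("mec_model_d2.onnx", "rb") as f:
        if not parser.parse(f.read()):
            for i in range(parser.num_errors):
                print(parser.get_error(i))
            raise RuntimeError("ONNX parse failed")
    config = builder.create_builder_config()
    config.max_workspace_size = 1 << 30  # 1 GiB, an assumed value
    engine = builder.build_engine(network, config)
    with open("mec_model_d2.engine", "wb") as f:  # assumed output name
        f.write(engine.serialize())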

Points of confusion:

  • If I want to add custom preprocessing (in lieu of the preprocessing that DeepStream already performs), where should it be implemented?
  • When the model was converted to ONNX, the ONNX parser did not flag these two layers; is it still necessary to create custom IPlugin layers?

deepstream_app_config_mec.txt
[application]
enable-perf-measurement=1
perf-measurement-interval-sec=1
gie-kitti-output-dir=streamscl

[tiled-display]
enable=0
rows=1
columns=1
width=640
height=480
gpu-id=0
nvbuf-memory-type=0

[source0]
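# type=4 selects an RTSP source; uri gives the stream address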
enable=1
type=4
uri=rtsp://192.168.1.251:8554/mystream
num-sources=1
gpu-id=0
cudadec-memtype=0

[streammux]
gpu-id=0
live-source=1
batch-size=1
batched-push-timeout=40000
width=640
height=480
enable-padding=0
nvbuf-memory-type=0

[sink0]
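# type=2 = EGL-based on-screen rendering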
enable=1
type=2
sync=1
source-id=0
gpu-id=0

[sink1]
enable=0
type=3
container=1
codec=1
sync=0
bitrate=2000
output-file=out.mp4
source-id=0

[sink2]
enable=0
type=4
codec=1
sync=0
bitrate=2000
rtsp-port=8554
udp-port=5400

[osd]
enable=1
gpu-id=0
border-width=1
text-size=15
text-color=1;1;1;1;
text-bg-color=0.3;0.3;0.3;1
font=Serif
show-clock=0
clock-x-offset=800
clock-y-offset=820
clock-text-size=12
clock-color=1;0;0;0
nvbuf-memory-type=0

[primary-gie]
enable=1
gpu-id=0
batch-size=1
gie-unique-id=1
interval=0
labelfile-path=mec_labels.txt
config-file=config_infer_mec.txt
nvbuf-memory-type=0
bbox-border-color0=1;0;0;1
bbox-border-color1=0;1;1;1
bbox-border-color2=0;0;1;1
bbox-border-color3=0;1;0;1

config_infer_mec.txt
[property]
gpu-id=0
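# nvinfer preprocessing applies y = net-scale-factor * (x - offsets);
# with scale 1/127.5 and offsets 127.5, input is normalized to [-1, 1]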
net-scale-factor=0.0078431372
offsets=127.5;127.5;127.5
model-color-format=0
onnx-file=mec_model_d2.onnx
labelfile-path=mec_labels.txt
batch-size=1
network-mode=0
num-detected-classes=2
interval=0
gie-unique-id=1
is-classifier=0
parse-bbox-func-name=NvDsInferParseCustomSSD
custom-lib-path=nvdsinfer_custom_impl_ssd/libnvdsinfer_custom_impl_ssd.so

[class-attrs-all]
threshold=0.5
roi-top-offset=0
roi-bottom-offset=0
detected-min-w=0
detected-min-h=0
detected-max-w=0
detected-max-h=0

mec_labels.txt
traffic_sign
person

Hi,

Would you mind sharing your model so we can check it?

Our parser doesn't support some newer layers, such as FusedBatchNormV3.
However, this layer is similar to the original FusedBatchNorm op, so it can be mapped back to FusedBatchNorm for the UFF format.

If you can replace the flagged operations with supported ones, a plugin implementation is not necessary.
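
For example, here is a minimal sketch of one way to patch the op types in a frozen GraphDef before UFF conversion; the .pb paths are placeholders, and the rename is only safe when downstream nodes consume the first output of the batch-norm op:

import tensorflow as tf

# Sketch: rename unsupported op types in a frozen GraphDef before running
# the UFF converter. "frozen_model.pb" is a placeholder path.
graph_def = tf.compat.v1.GraphDef()
with tf.io.gfile.GFile("frozen_model.pb", "rb") as f:
    graph_def.ParseFromString(f.read())

for node in graph_def.node:
    if node.op == "FusedBatchNormV3":
        # Only safe when downstream nodes consume the first output;
        # FusedBatchNormV3 adds reserve-space outputs that V1 lacks.
        node.op = "FusedBatchNorm"
        if "U" in node.attr:
            del node.attr["U"]  # V3-only attribute, absent on FusedBatchNorm
    elif node.op == "AddV2":
        node.op = "Add"  # AddV2 is semantically equivalent to Add

with tf.io.gfile.GFile("frozen_model_patched.pb", "wb") as f:
    f.write(graph_def.SerializeToString())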

Also, may I know which platform you are using?

Thanks.

Hi,

Just wanted to let you know that I’ve sent a direct message with more context on what I have tried so far. Is there an email address you can share, so I can send you a .pb file of the model?

Thanks for the support!

Please follow the update in the direct message.
Thanks.

Continuing the discussion from NasNet-based model ingestion into DeepStream:

Hi AastaLLL, I work with Qsu, who asked this question initially. We are following up on this thread to ask whether DS 5.0 solves the issue of not being able to create a TensorRT engine for our model due to the padding layers.
From what I understand, DS 5.0 added support for more TensorFlow functionality regarding the file format. Did this also fix the layer compatibility, so that DS 5.0 or TensorRT can take in our NasNet model? Please advise. Thanks!

Hi,

DeepStream uses TensorRT as its backend engine.
In JetPack 4.4, the TensorRT package is upgraded to v7.1, which does add support for several new layers.

I will give it a try later.
Thanks.

Thank you, Aasta. Please let me know if you find that the NasNet-based model is now compatible with DS 5.0 / the upgraded TensorRT v7.1. Feel free to PM me.

Hi,

Thanks for your patience.

I just tested your NasNet model with TensorRT 7.1 but hit the same error again:

----------------------------------------------------------------
Input filename:   mec_model_d2.onnx
ONNX IR version:  0.0.6
Opset version:    8
Producer name:    tf2onnx
Producer version: 1.5.5
Domain:           
Model version:    0
Doc string:       
----------------------------------------------------------------
[06/05/2020-05:16:12] [W] [TRT] onnx2trt_utils.cpp:217: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
ERROR: builtin_op_importers.cpp:2160 In function importPad:
[8] Assertion failed: convertOnnxPadding(onnxPadding, &begPadding, &endPadding) && "This version of TensorRT only supports padding on the outer two dimensions!"
[06/05/2020-05:16:12] [E] Failed to parse onnx file
[06/05/2020-05:16:12] [E] Parsing model failed
[06/05/2020-05:16:12] [E] Engine creation failed
[06/05/2020-05:16:12] [E] Engine set up failed
&&&& FAILED TensorRT.trtexec # /usr/src/tensorrt/bin/trtexec --onnx=mec_model_d2.onnx
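
To locate the Pad layers that trigger this assertion, you can inspect the ONNX graph directly; a minimal sketch with the onnx Python package (opset 8 stores the padding as a node attribute):

import onnx

# Sketch: list every Pad node and its padding values to see which ones
# pad the inner (batch/channel) dimensions that importPad rejects.
model = onnx.load("mec_model_d2.onnx")
for node in model.graph.node:
    if node.op_type == "Pad":
        pads = next((a.ints for a in node.attribute if a.name == "pads"), None)
        print(node.name, list(pads) if pads is not None else "(pads not set)")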

Thanks.

I am facing the same problem when I try to convert MobileArcFaceNet to TensorRT. Is there any solution yet? I am using TRT 7.1.3 and DeepStream 5.0.
Thanks a lot.

Hi phuong1998bn,

Please open a new topic for your issue. Thanks.