Deploy a custom TF2 object detection model

Hello Nvidia!

Description

I am looking for the best way to convert a TensorFlow 2.0 model to ONNX (or another format) so that it can be run with GStreamer/DeepStream.

What should be done:
Download the pretrained ssd_mobilenet_v2_fpnlite_640x640_coco17_tpu model, retrain it on a simple dataset using TensorFlow 2.7.0, and convert it to a format accepted by DeepStream/GStreamer. (The simplest way to verify it is with gst-launch-1.0, as sketched below.)
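For that verification step, a minimal gst-launch-1.0 pipeline along these lines should be enough (the stream path is the DeepStream sample clip; the resolution and config path are assumptions on my side, not values from my setup):

gst-launch-1.0 filesrc location=/opt/nvidia/deepstream/deepstream-6.0/samples/streams/sample_720p.h264 ! h264parse ! nvv4l2decoder ! m.sink_0 nvstreammux name=m batch-size=1 width=1280 height=720 ! nvinfer config-file-path=onnx_config.txt ! nvvideoconvert ! nvdsosd ! nveglglessink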

What I was trying to do:

I downloaded the pretrained ssd_mobilenet_v2_fpnlite_640x640_coco17_tpu-8 model and retrained it using the Object Detection API:
https://tensorflow-object-detection-api-tutorial.readthedocs.io/en/latest/install.html#downloading-the-tensorflow-model-garden
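Roughly, the retraining and export boil down to the Object Detection API's standard scripts; the paths here are placeholders of mine, so see the tutorial for the exact invocation:

python model_main_tf2.py --model_dir=models/my_ssd --pipeline_config_path=models/my_ssd/pipeline.config
python exporter_main_v2.py --input_type image_tensor --pipeline_config_path models/my_ssd/pipeline.config --trained_checkpoint_dir models/my_ssd --output_directory exported_model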

To prepare the ONNX model I used this tutorial:
https://github.com/pskiran1/TensorRT-support-for-Tensorflow-2-Object-Detection-Models
It works well, and the retrained model performs as expected.

Finally, I produced model.onnx, which infer.py uses to create the outputs.
infer.py works fine; I get correct bounding boxes on my pictures.
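For reference, that repository's flow is three scripts, roughly as below; I am quoting the flags from memory, so please treat them as approximate and check the repository's README for the exact ones:

python create_onnx.py --pipeline_config exported_model/pipeline.config --saved_model exported_model/saved_model --onnx model.onnx
python build_engine.py --onnx model.onnx --engine engine.trt --precision fp16
python infer.py --engine engine.trt --input test_images/ --output results/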

The next step is to use the model with gstreamer/deepstream.

I used a pipeline based on the “deepstream-test1” example, and of course that works.
After changing config.txt to point at the ONNX model, I see the errors below:

0:00:00.156822272 23035 0x56001f015b00 WARN                 nvinfer gstnvinfer.cpp:635:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::initialize() <nvdsinfer_context_impl.cpp:1161> [UID = 1]: Warning, OpenCV has been deprecated. Using NMS for clustering instead of cv::groupRectangles with topK = 20 and NMS Threshold = 0.5
0:00:00.156953056 23035 0x56001f015b00 INFO                 nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1914> [UID = 1]: Trying to create engine from model files
WARNING: [TRT]: onnx2trt_utils.cpp:364: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
WARNING: [TRT]: Detected invalid timing cache, setup a local cache instead
[2021-12-07 11:54:17,786] [INF] [pipeline.py:113] Timer fired, sending EOS
0:00:44.910111525 23035 0x56001f015b00 INFO                 nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1947> [UID = 1]: serialize cuda engine to file: /workspace/data/model/onnx_model/onnx.onnx_b1_gpu0_fp16.engine successfully
INFO: ../nvdsinfer/nvdsinfer_model_builder.cpp:610 [Implicit Engine Info]: layers num: 5
0   INPUT  kFLOAT input_tensor:0  640x640x3       
1   OUTPUT kINT32 num_detections  1               
2   OUTPUT kFLOAT detection_boxes 100x4           
3   OUTPUT kFLOAT detection_scores 100             
4   OUTPUT kINT32 detection_classes 100             

0:00:44.917205346 23035 0x56001f015b00 ERROR                nvinfer gstnvinfer.cpp:632:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::preparePreprocess() <nvdsinfer_context_impl.cpp:964> [UID = 1]: RGB/BGR input format specified but network input channels is not 3
ERROR: nvdsinfer_context_impl.cpp:1267 Infer Context prepare preprocessing resource failed., nvinfer error:NVDSINFER_CONFIG_FAILED
0:00:44.920322106 23035 0x56001f015b00 WARN                 nvinfer gstnvinfer.cpp:841:gst_nvinfer_start:<primary-inference> error: Failed to create NvDsInferContext instance
0:00:44.920338396 23035 0x56001f015b00 WARN                 nvinfer gstnvinfer.cpp:841:gst_nvinfer_start:<primary-inference> error: Config file path: /workspace/data/model/config/onnx_config.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
[2021-12-07 11:54:31,603] [ERR] [pipeline.py:142] Error: gst-resource-error-quark: Failed to create NvDsInferContext instance (1): gstnvinfer.cpp(841): gst_nvinfer_start (): /GstPipeline:pipeline0/GstNvInfer:primary-inference:
Config file path: /workspace/data/model/config/onnx_config.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
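For context, the [property] section of my onnx_config.txt at this point looked roughly like this (the model paths match the log above; net-scale-factor/offsets reflect SSD MobileNet's [-1, 1] preprocessing, and num-detected-classes is a placeholder for my dataset):

[property]
gpu-id=0
net-scale-factor=0.0078431372
offsets=127.5;127.5;127.5
onnx-file=/workspace/data/model/onnx_model/onnx.onnx
model-engine-file=/workspace/data/model/onnx_model/onnx.onnx_b1_gpu0_fp16.engine
batch-size=1
network-mode=2
num-detected-classes=1
gie-unique-id=1
network-type=0
cluster-mode=2
model-color-format=0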

As I understand it, the problem is the NHWC input format, so it might be enough to convert the model to NCHW.
Is it possible to do that?

If yes, could you provide some tutorials/links/tools showing how to do it?
If not, how can I use ONNX with DeepStream at all, and what format should I use to convert a MobileNetV2 model trained with TensorFlow so that it is usable with DeepStream?

Environment

TensorRT Version: 8.0.1-1/8.2.1-1
GPU Type: RTX 2000
Nvidia Driver Version: 470.86
CUDA Version: 11.3/11.4
CUDNN Version:
Operating System + Version: Ubuntu 18.04 / 20.04
Python Version (if applicable):
TensorFlow Version (if applicable):
PyTorch Version (if applicable):
Baremetal or Container (if container which image + tag): deepstream:6.0-devel

I have listed multiple versions for TensorRT, CUDA, and Ubuntu because I was testing many combinations.

BR,
BonTo

Can you share the nvinfer configuration used?

Sure,
pipeline.config (4.3 KB)

I kept pushing and have solved the above error: I converted the model and now have an NCHW ONNX model.

To solve it, I just added --inputs-as-nchw input0:0 to the tf2onnx.convert invocation.
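For anyone hitting the same thing, the command-line equivalent is below; the saved-model path and opset are assumptions from my setup, and the input tensor name has to match your own graph:

python -m tf2onnx.convert --saved-model exported_model/saved_model --output model.onnx --opset 13 --inputs-as-nchw input0:0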

Unfortunately, I have run into a different problem, which I have described here:

Of course, I can also attach the model that I am trying to convert with TensorRT.

onnx-tensorrt/operators.md at master · onnx/onnx-tensorrt · GitHub
Please try NVES’s suggestion.
