TensorFlow ResNet50 for DeepStream

I am trying to get the TensorFlow ResNet50 object detection model working with DeepStream. I have tried to get the objectDetector_SSD example working with a ResNet50 model. I am working from the nvcr.io/nvidia/deepstream:4.0-19.07 image.

First, I verified that the example works as specified in the README.

Then I attempted to convert the ssd_resnet50_v1_fpn_shared_box_predictor_640x640_coco14_sync_2018_07_03 model to UFF, which failed. I modified the config as described by liuyoungshop here:
https://devtalk.nvidia.com/default/topic/1037256/tensorrt/sampleuffssd-conversion-fails-keyerror-image_tensor-/
After modifying the config, the conversion to UFF succeeded (the sketch below shows roughly what the config edit looks like).
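
For anyone who hits the same KeyError, the fix amounts to editing the config.py that is passed to convert-to-uff. This is only a minimal sketch assuming the stock sampleUffSSD config as the starting point; the input shape, plugin parameters, and namespace names below are illustrative rather than the exact values from that thread:

    # config.py -- graph surgery applied by convert-to-uff via the -p flag.
    # Sketch only: shapes and namespace names are illustrative for the
    # ResNet50 FPN checkpoint; adjust them to match your frozen graph.
    import graphsurgeon as gs
    import tensorflow as tf

    # Replace the dynamic image_tensor placeholder with a fixed-shape Input
    # node (this avoids the KeyError on 'image_tensor' during conversion).
    Input = gs.create_node("Input",
                           op="Placeholder",
                           dtype=tf.float32,
                           shape=[1, 3, 640, 640])

    # Map the TF postprocessing namespace onto the TensorRT NMS plugin.
    NMS = gs.create_plugin_node(name="NMS", op="NMS_TRT",
                                shareLocation=1,
                                varianceEncodedInTarget=0,
                                backgroundLabelId=0,
                                confidenceThreshold=1e-8,
                                nmsThreshold=0.6,
                                topK=100,
                                keepTopK=100,
                                numClasses=91,
                                inputOrder=[0, 2, 1],
                                confSigmoid=1,
                                isNormalized=1)

    # The stock config also maps MultipleGridAnchorGenerator onto the
    # GridAnchor_TRT plugin; the FPN variant needs its own anchor handling
    # (omitted here).
    namespace_plugin_map = {
        "image_tensor": Input,
        "Preprocessor": Input,
        "Postprocessor": NMS,
    }

    def preprocess(dynamic_graph):
        # Collapse the mapped namespaces and drop the original graph outputs.
        dynamic_graph.collapse_namespaces(namespace_plugin_map)
        dynamic_graph.remove(dynamic_graph.graph_outputs,
                             remove_exclusive_dependencies=False)

and then re-running the conversion:

    convert-to-uff frozen_inference_graph.pb -O NMS -p config.py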

However, when I try to run the GStreamer pipeline, I get an error:

0:00:02.111132286  1764 0x55e82b8b80c0 ERROR                UffParser: Parser error: FeatureExtractor/resnet_v1_50/fpn/top_down/nearest_neighbor_upsampling/mul: Invalid scale mode, nbWeights: 4
0:00:02.117444937  1764 0x55e82b8b80c0 ERROR                nvinfer gstnvinfer.cpp:511:gst_nvinfer_logger:<nvinfer0> NvDsInferContext[UID 1]:generateTRTModel(): Failed to parse UFF file: incorrect file or incorrect input/output blob names

I suppose this is because nearest neighbor resize isn’t supported?

It looks like the TensorFlow ResizeNearest op is supported in the latest TensorRT (6.0.1):

https://docs.nvidia.com/deeplearning/sdk/tensorrt-archived/tensorrt-601/tensorrt-release-notes/tensorrt-6.html#rel_6-0-1

My questions are:

  • Has anyone had success getting other SSD object detection models, especially ResNet50, to convert to UFF? What did you do to get it working?
  • Has anyone had success writing plugins that use TF-TRT? Is there any planned support for TF-TRT in DeepStream?
  • Is there a timeline for a Docker image with TensorRT 6.0.1 + DeepStream becoming available?

Hi,

1. YES. Please check this page for information:

2. DeepStream doesn’t support TF-TRT yet.

3. Sorry we cannot share our schedule here.

Thanks.

Thank you for the information, I’m still working on this.

I’ve been going through the gst-dsexample plugin; I thought I would be able to call my TF-TRT model from C++ and thereby modify gst-dsexample to use it.

It looks like the preferred way to call it from C++ is to create a plan file that is fully compatible with TensorRT. I am running into issues creating a plan file as described here: https://docs.nvidia.com/deeplearning/frameworks/tf-trt-user-guide/index.html#tensorrt-plan, since some of the operators are not supported.
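
Concretely, I’m following the plan-extraction flow from that guide, which looks roughly like this (a sketch assuming a TF 1.x frozen graph and the standard object detection API output names):

    import tensorflow as tf
    import tensorflow.contrib.tensorrt as trt  # TF-TRT in TF 1.x

    with tf.gfile.GFile("frozen_inference_graph.pb", "rb") as f:
        frozen_graph = tf.GraphDef()
        frozen_graph.ParseFromString(f.read())

    # Convert; unsupported ops stay as TF nodes outside the TRT segments,
    # which is exactly the problem: the plan only covers converted segments.
    trt_graph = trt.create_inference_graph(
        input_graph_def=frozen_graph,
        outputs=["detection_boxes", "detection_scores",
                 "detection_classes", "num_detections"],
        max_batch_size=1,
        max_workspace_size_bytes=1 << 30,
        precision_mode="FP16",
        is_dynamic_op=False)  # static engines are serialized up front

    # Each TRTEngineOp node carries a serialized TensorRT engine; dump them.
    for i, node in enumerate(trt_graph.node):
        if node.op == "TRTEngineOp":
            with tf.gfile.GFile("trt_segment_%d.plan" % i, "wb") as f:
                f.write(node.attr["serialized_segment"].s)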

Is there a way to call the TF-TRT model from C++ without converting to a plan file?

Hi,

First, please note that DeepStream doesn’t support TF-TRT yet.

In general, you will use the Python interface, since TF-TRT is embedded in the TensorFlow framework behind an extra flag; a rough sketch of that flow is below.
Here is a step-by-step TF-TRT tutorial for your reference: https://github.com/NVIDIA-AI-IOT/tf_trt_models
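
As a rough sketch of that flow (assuming a TF 1.x frozen graph; the tensor names follow the TF object detection API convention and may differ for your model):

    import numpy as np
    import tensorflow as tf
    import tensorflow.contrib.tensorrt as trt

    with tf.gfile.GFile("frozen_inference_graph.pb", "rb") as f:
        frozen_graph = tf.GraphDef()
        frozen_graph.ParseFromString(f.read())

    # The "extra flag" is essentially this one conversion call;
    # everything else is ordinary TensorFlow.
    trt_graph = trt.create_inference_graph(
        input_graph_def=frozen_graph,
        outputs=["detection_boxes", "detection_scores", "num_detections"],
        max_batch_size=1,
        precision_mode="FP16")

    with tf.Session() as sess:
        tf.import_graph_def(trt_graph, name="")
        # Dummy uint8 frame standing in for a real image.
        image = np.random.randint(0, 255, (1, 640, 640, 3), dtype=np.uint8)
        boxes, scores = sess.run(
            ["detection_boxes:0", "detection_scores:0"],
            feed_dict={"image_tensor:0": image})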

Thanks.

Thanks again.

I ended up creating a gst-dsexample plugin that runs my TensorFlow model in OpenCV.

It seems like this strategy could readily be used with TF-TRT as well, giving a clean C++/C interface; the core of the approach is sketched below.
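
For reference, the heart of the plugin boils down to the OpenCV DNN calls below (shown in Python for brevity; the plugin itself uses the equivalent cv::dnn C++ API, and the file names are placeholders):

    import cv2
    import numpy as np

    # Load a frozen TF graph plus the .pbtxt that OpenCV's
    # tf_text_graph_ssd.py sample generates for SSD-style detectors.
    net = cv2.dnn.readNetFromTensorflow("frozen_inference_graph.pb",
                                        "graph.pbtxt")

    frame = np.zeros((640, 640, 3), dtype=np.uint8)  # stand-in for a video frame
    blob = cv2.dnn.blobFromImage(frame, size=(640, 640), swapRB=True)
    net.setInput(blob)

    # Output shape [1, 1, N, 7]: image id, class id, score, x1, y1, x2, y2.
    detections = net.forward()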

Which platform did you use to run your plugin?

I’m in the same position as you: I want to run inference using a custom TensorRT model. I’m on the Jetson platform, and NVIDIA does not provide TensorFlow C++ API support for Jetson, so I’ll have to build it from source.

What approach did you use?

You should try DeepStream 5.0; it looks like it supports TensorRT out of the box. I haven’t had a chance to play with it yet, and note that I was working on an x86 platform. I also got TensorFlow models working on a Jetson Nano, but never got TensorRT to work there.

Isn’t ResNet50 a classification model?