Use custom behavior cloning neural network with nvinfer

Hello!

I am using a Jetson Nano.
I read that you can use GStreamer to execute TensorRT engines with NVIDIA's nvinfer plugin.
However, according to the DeepStream Plugin Manual (https://docs.nvidia.com/metropolis/deepstream/plugin-manual/index.html#page/DeepStream_Plugin_Manual%2Fdeepstream_plugin_details.02.01.html%23wwpID0E0IZ0HA), the nvinfer plugin “only” supports object detection, classification and segmentation. But since the plugin runs TensorRT runtime engines, shouldn't I be able to execute any type of neural network that has been converted with TensorRT?

Specifically, I have a behavioral cloning neural network. It receives an input image and outputs a steering angle (it's for an autonomous driving car). Now I want to integrate this network into my GStreamer application like so (a rough sketch of such a pipeline follows the list):

  1. Capture an image from the camera with nvarguscamerasrc
  2. Send that image to the nvinfer element (i.e. perform inference on it)
  3. Get the output data from nvinfer and act on it (i.e. perform the business logic)
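
For reference, here is a minimal, untested sketch of what I have in mind. The element names, the resolution and the config file path bc_net_config.txt are just placeholders; nvstreammux sits in front of nvinfer because nvinfer expects batched NVMM buffers with batch metadata attached:

#include <gst/gst.h>

int main (int argc, char *argv[])
{
  gst_init (&argc, &argv);

  GError *error = NULL;
  GstElement *pipeline = gst_parse_launch (
      "nvarguscamerasrc ! "
      "video/x-raw(memory:NVMM), width=1280, height=720, framerate=30/1 ! "
      "mux.sink_0 nvstreammux name=mux batch-size=1 width=1280 height=720 ! "
      "nvinfer name=bc-net config-file-path=bc_net_config.txt ! "
      "fakesink sync=false", &error);
  if (pipeline == NULL) {
    g_printerr ("Failed to build pipeline: %s\n", error->message);
    return -1;
  }

  /* The steering angle would be read in a buffer probe on the src pad of
   * the bc-net element (step 3 above). */
  GMainLoop *loop = g_main_loop_new (NULL, FALSE);
  gst_element_set_state (pipeline, GST_STATE_PLAYING);
  g_main_loop_run (loop);

  gst_element_set_state (pipeline, GST_STATE_NULL);
  gst_object_unref (pipeline);
  g_main_loop_unref (loop);
  return 0;
}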

The reason I want to implement this with GStreamer is that I also have other neural networks which modify the image passed through the pipeline. The behavioral cloning network has to get the unmodified camera feed, otherwise the prediction is faulty.

I know that jetson.inference exists, but I don't think I can use it in this case.
I also know how to convert the neural network into a TensorRT runtime engine (https://docs.nvidia.com/deeplearning/sdk/tensorrt-archived/tensorrt_401/tensorrt-api/python_api/workflows/tf_to_tensorrt.html).

My question is: Is this too complicated or is there a better way?
Is my current idea feasible? If not, why?

Could somebody please point me in the right direction?
Any help is much appreciated!

Hi,

The DeepStream SDK targets multimedia pipelines, so most of our samples focus on image/camera use cases.

Based on your description, your use case can adopt the nvinfer component directly.
You can access the GstBuffer on nvinfer's output and apply the post-processing for your algorithm there, for example with a pad probe.
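
As a rough, untested sketch (assuming the nvinfer instance in your pipeline is named "bc-net"), such a probe could be attached like this:

#include <gst/gst.h>

static GstPadProbeReturn
bc_net_src_probe (GstPad *pad, GstPadProbeInfo *info, gpointer user_data)
{
  GstBuffer *buf = GST_PAD_PROBE_INFO_BUFFER (info);
  /* Parse the inference output attached to this buffer and run the
   * steering/business logic here (see the tensor-meta example later
   * in this thread). */
  (void) buf;
  return GST_PAD_PROBE_OK;
}

static void
attach_bc_net_probe (GstElement *pipeline)
{
  GstElement *bc_net = gst_bin_get_by_name (GST_BIN (pipeline), "bc-net");
  GstPad *src_pad = gst_element_get_static_pad (bc_net, "src");

  /* Call bc_net_src_probe for every buffer leaving nvinfer. */
  gst_pad_add_probe (src_pad, GST_PAD_PROBE_TYPE_BUFFER,
      bc_net_src_probe, NULL, NULL);

  gst_object_unref (src_pad);
  gst_object_unref (bc_net);
}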

Thanks.

Hi,

thanks for the quick response and the great news.

Just one more question: do I have to modify the nvinfer configuration file in a specific way to reflect that my network is different, and are there any pitfalls? Specifically, which network-type do I have to set in the nvinfer configuration file, or do I just leave it blank?

Thanks.

Hi,

We have a sample that demonstrates how to access nvinfer's output tensor data via DeepStream.
You can check it for more information:
/opt/nvidia/deepstream/deepstream-4.0/sources/apps/sample_apps/deepstream-infer-tensor-meta-test

For the network-type, please set it to 100, which is reserved for the general ("Other") model type.
For example, in dstensor_pgie_config.txt:

[property]
...
## 0=Detector, 1=Classifier, 2=Segmentation, 100=Other
network-type=100
# Enable tensor metadata output
output-tensor-meta=1
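
With output-tensor-meta=1, nvinfer attaches an NvDsInferTensorMeta to each frame's user metadata. Below is a rough sketch (based on the sample above, not verified against your network) of reading it inside a src-pad buffer probe, assuming nvinfer runs as the primary GIE and your model has a single FP32 output holding the steering angle:

#include <gst/gst.h>
#include "gstnvdsmeta.h"
#include "gstnvdsinfer.h"

static GstPadProbeReturn
bc_net_src_probe (GstPad *pad, GstPadProbeInfo *info, gpointer user_data)
{
  GstBuffer *buf = GST_PAD_PROBE_INFO_BUFFER (info);
  NvDsBatchMeta *batch_meta = gst_buffer_get_nvds_batch_meta (buf);

  for (NvDsMetaList *l_frame = batch_meta->frame_meta_list; l_frame != NULL;
       l_frame = l_frame->next) {
    NvDsFrameMeta *frame_meta = (NvDsFrameMeta *) l_frame->data;

    for (NvDsMetaList *l_user = frame_meta->frame_user_meta_list;
         l_user != NULL; l_user = l_user->next) {
      NvDsUserMeta *user_meta = (NvDsUserMeta *) l_user->data;
      if (user_meta->base_meta.meta_type != NVDSINFER_TENSOR_OUTPUT_META)
        continue;

      NvDsInferTensorMeta *tensor_meta =
          (NvDsInferTensorMeta *) user_meta->user_meta_data;

      for (guint i = 0; i < tensor_meta->num_output_layers; i++) {
        NvDsInferLayerInfo *layer = &tensor_meta->output_layers_info[i];
        /* Assumes a single FP32 output value (the steering angle). */
        float *host_buf = (float *) tensor_meta->out_buf_ptrs_host[i];
        g_print ("layer %s -> steering angle %f\n", layer->layerName,
            host_buf[0]);
        /* business logic goes here */
      }
    }
  }
  return GST_PAD_PROBE_OK;
}

The required headers ship with the DeepStream SDK; please refer to the Makefile of the sample app above for the exact include and linker flags.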

Thanks.