I am using a Jetson Nano.
I read that you can use GStreamer to execute TensorRT engines with NVIDIA's nvinfer plugin.
However, according to the DeepStream Plugin Manual (https://docs.nvidia.com/metropolis/deepstream/plugin-manual/index.html#page/DeepStream_Plugin_Manual%2Fdeepstream_plugin_details.02.01.html%23wwpID0E0IZ0HA), the nvinfer plugin only supports object detection, classification, and segmentation. Since the plugin uses TensorRT runtime engines, shouldn't I be able to execute any type of neural network that has been converted with TensorRT?
Specifically, I have a behavioral cloning neural network: it receives an input image and outputs a steering angle (it's for an autonomous driving car). Now I want to integrate this network into my GStreamer application like so:
- Capture an image from the camera with nvarguscamerasrc
- Feed that image to the nvinfer element (i.e. perform inference on it)
- Get the output data from nvinfer and act on it (i.e. perform the business logic)
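From what I can tell from the docs, nvinfer can be told to skip its built-in detector/classifier parsing and just attach the raw output tensors to the buffer, which would let me run an arbitrary engine. Here is roughly the config I would try (all paths and dimensions are placeholders, to be adjusted to my model):

```
# config_infer_bc.txt -- rough sketch, values are placeholders
[property]
gpu-id=0
model-engine-file=behavioral_cloning.engine
network-type=100          # "other": no built-in detection/classification parsing
output-tensor-meta=1      # attach raw output tensors to the buffer metadata
infer-dims=3;66;200       # example input CHW, must match the model
batch-size=1
network-mode=2            # FP16, since this runs on the Nano
```

The pipeline would then be something like `nvarguscamerasrc ! nvvidconv ! ... ! nvstreammux ! nvinfer config-file-path=config_infer_bc.txt ! ...` (nvinfer needs nvstreammux upstream, as far as I understand), with a pad probe downstream that reads the attached tensor metadata.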
The reason I want to implement this with GStreamer is that I also have other neural networks that modify the image as it passes through the pipeline. The behavioral cloning network has to get the unmodified camera feed, otherwise its prediction is faulty.
I know that jetson.inference exists, but I don't think I can use it in this case.
I also know how to convert the neural network into a TensorRT runtime engine (https://docs.nvidia.com/deeplearning/sdk/tensorrt-archived/tensorrt_401/tensorrt-api/python_api/workflows/tf_to_tensorrt.html).
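If the raw-tensor route works, I would parse the output myself in a buffer probe. Here is a minimal sketch of just the parsing step, assuming the network has a single FP32 output holding the steering angle (the function names and the clamping logic are placeholders for my business logic, not anything from DeepStream):

```python
import struct

def parse_steering(raw_bytes):
    """Decode one float32 steering angle from the raw output-layer
    bytes that nvinfer attaches when output-tensor-meta=1.
    Assumes a single output layer containing a single FP32 value."""
    (angle,) = struct.unpack("<f", raw_bytes[:4])
    return angle

def act_on_angle(angle, max_angle=1.0):
    """Placeholder business logic: clamp the prediction and turn it
    into a steering command for the car."""
    angle = max(-max_angle, min(max_angle, angle))
    return {"steering": angle}

# Stand-in for the real tensor memory, just to show the flow:
fake_output = struct.pack("<f", 0.25)
print(act_on_angle(parse_steering(fake_output)))  # {'steering': 0.25}
```

The real probe would of course read the bytes out of the tensor metadata on the GstBuffer rather than a fake buffer, but the decoding would look like this.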
My question is: Is this too complicated or is there a better way?
Is my current idea feasible? If not, why?
Could somebody please point me in the right direction?
Any help is much appreciated!