• Hardware Platform: Jetson • DeepStream Version: 6.2 • JetPack Version: 5.1 • Issue Type: Question
I am interested in creating a DeepStream pipeline that runs inference on a simple fully connected NN (alongside using gst-nvinfer for a CNN).
My question is: what is the recommended method for implementing a plugin to do this? gst-nvinfer has a lot of image-specific functionality and only supports a few types of models.
I am interested both in the suggested way to run a TRT engine in a custom plugin, and in how to transfer 1D tensors between plugins.
DeepStream can deploy any network supported by TensorRT.
If your network can be converted to ONNX, nvinferaudio can deploy it directly.
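For the ONNX path, a minimal config sketch is shown below. It uses the gst-nvinfer config-file format that nvinferaudio shares; the file names are placeholders, and any audio-specific keys your pipeline also needs are listed in the Gst-nvinferaudio documentation.

```ini
# Sketch of a [property] group for deploying an ONNX model.
# "fc_model.onnx" and the engine file name are placeholders.
[property]
gpu-id=0
onnx-file=fc_model.onnx
model-engine-file=fc_model.onnx_b1_gpu0_fp16.engine
batch-size=1
# 0=FP32, 1=INT8, 2=FP16
network-mode=2
```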
If your network needs a special TensorRT parser, you can customize the TensorRT parser plugin as you like. Please refer to the “custom-network-config”, “engine-create-func-name” and “custom-lib-path” properties of nvinferaudio: Gst-nvinferaudio — DeepStream 6.2 Release documentation. The usage is the same as nvinfer.
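If you take the custom-parser route, the library pointed to by custom-lib-path exports an engine-creation function, and its symbol name goes into engine-create-func-name. Below is a minimal sketch, assuming the NvDsInferEngineCreateCustomFunc prototype from nvdsinfer_custom_impl.h in DeepStream 6.2; the function name CreateCustomFCEngine and the network body are placeholders you would replace with your fully connected model.

```cpp
// Sketch of a custom engine-creation library, built as a shared object and
// referenced through custom-lib-path. The exported symbol name below
// (CreateCustomFCEngine) is a placeholder for engine-create-func-name.
// Verify the prototype against nvdsinfer_custom_impl.h in your install.
#include "nvdsinfer_custom_impl.h"

extern "C" bool CreateCustomFCEngine(
    nvinfer1::IBuilder *const builder,
    nvinfer1::IBuilderConfig *const builderConfig,
    const NvDsInferContextInitParams *const initParams,
    nvinfer1::DataType dataType, // requested precision (follows network-mode)
    nvinfer1::ICudaEngine *&cudaEngine)
{
    // initParams->customNetworkConfigFilePath carries the path given in
    // custom-network-config, if your function reads a weights/topology file.
    const auto flags = 1U << static_cast<uint32_t>(
        nvinfer1::NetworkDefinitionCreationFlag::kEXPLICIT_BATCH);
    nvinfer1::INetworkDefinition *network = builder->createNetworkV2(flags);
    if (!network)
        return false;

    // Define the fully connected model here: add the input tensor, the
    // matrix-multiply/bias/activation layers, and mark the output tensor.
    // (Omitted: this depends entirely on your network.)

    cudaEngine = builder->buildEngineWithConfig(*network, *builderConfig);
    delete network;
    return cudaEngine != nullptr;
}
```

In the nvinferaudio config you would then set custom-lib-path to the built .so, engine-create-func-name to the exported symbol (here CreateCustomFCEngine), and optionally custom-network-config to whatever file your function reads.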