• Hardware Platform: Jetson
• DeepStream Version: 6.2
• JetPack Version: 5.1
• Issue Type: Question
I am interested in creating a DeepStream pipeline that runs inference on a simple fully connected NN (alongside using gst-nvinfer for a CNN).
My question is: what is the recommended method for implementing a plugin to do this? gst-nvinfer has a lot of image-specific functionality and only supports a few types of models.
I am interested both in the suggested way to run a TRT engine in a custom plugin, and in information on transferring 1D tensors between plugins.
gst-nvinferaudio supports audio/speech inference.
gst-nvds3dfilter supports inference on other types of data.
There are samples in the SDK.
Thank you, but from what I can understand from the documentation, nvinferaudio only supports RNN autoencoders and CNNs.
And I was unable to find any reference to using gst-nvds3dfilter with a TRT engine.
Do you have any links for samples on using a linear or fully connected model (not CNN or RNN) in DeepStream?
DeepStream can deploy any TensorRT-supported network.
If your network can be converted to ONNX, nvinferaudio can deploy it directly.
If your network needs a special TensorRT parser, you can customize the TensorRT parser plugin as you like. Please refer to "custom-network-config", "engine-create-func-name" and "custom-lib-path" in the Gst-nvinferaudio — DeepStream 6.2 Release documentation. The usage is the same as nvinfer.
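For reference, a minimal sketch of how those keys fit into an nvinfer-style config file; the library path, function name, and engine file name below are hypothetical placeholders, not values from this thread:

```ini
[property]
# Hand engine creation over to a custom library instead of the built-in parsers
custom-lib-path=/path/to/libcustom_engine.so
engine-create-func-name=CreateCustomEngine
# Engine produced by the custom function is cached here after the first run
model-engine-file=fc_layer.engine
```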
Thank you for your response and the provided links. I have been trying to use this information to implement my goal, but it has not been working.
I have attached, for reference, the sample ONNX model I am trying to run inference on. It has 2 layers, with input shape (1,4) and output shape (1,4) [HW].
fc_layer.onnx (322 Bytes)
I have attempted to run this model using both the nvinfer and nvinferaudio plugins, referencing the FasterRCNN example and the deepstream-audio app.
With both of these I am getting an error similar to:
0:00:11.506744837 70111 0xaaab0a00e4f0 ERROR nvinferbase gstnvinferbase.cpp:421:gst_nvinfer_logger:<audio_classifier> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::initInferenceInfo() <nvdsinfer_context_impl.cpp:1121> [UID = 1]: Infer Context default input_layer is not a image[CHW]
ERROR: Infer context initialize inference info failed, nvinfer error:NVDSINFER_TENSORRT_ERROR.
Do you have suggestions for how to proceed with running inference on the provided ONNX model in DeepStream?
The current gst-nvinferaudio only supports PCM-based models. To keep things simple, can you use TensorRT directly instead?
This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.