DeepStream 6.0 Python App - Custom Parser for SSD

I have some experience with jetson-inference on the Jetson Xavier NX, and I have a few models (SSD-Mobilenet-v2) trained with it. Now I am migrating to DeepStream. This is my pipeline:
nvarguscamerasrc → nvstreammux → nvvideoconvert → nvinfer → nvdsosd → nvegltransform → nveglglessink
Everything is working fine except nvinfer. I know nvinfer requires a configuration file, but I am not clear about the custom parser implementation for ONNX models trained with jetson-inference. Could you please provide a sample config file for running inference on these models?
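
For reference, a minimal sketch of the kind of config being asked for, assuming a jetson-inference ONNX export. The `[property]` keys are standard nvinfer options, but the file names, engine name, class count, thresholds, preprocessing values, and the parser function/library names are placeholders to adapt, not verified values:

```
[property]
gpu-id=0
# Preprocessing assumed to match jetson-inference SSD training,
# i.e. (x - 127.5) / 127.5 on RGB input; verify against your setup.
net-scale-factor=0.0078431372549
offsets=127.5;127.5;127.5
model-color-format=0
# Placeholder paths: point these at your exported model and label file.
onnx-file=ssd-mobilenet.onnx
model-engine-file=ssd-mobilenet.onnx_b1_gpu0_fp16.engine
labelfile-path=labels.txt
batch-size=1
# 0=FP32, 1=INT8, 2=FP16
network-mode=2
# Number of lines in labels.txt (jetson-inference puts BACKGROUND first)
num-detected-classes=4
interval=0
gie-unique-id=1
# 0 = detector
network-type=0
# 2 = NMS clustering
cluster-mode=2
# Custom parser entry point and library; the library must be built
# from your own parser source (see the parser sketch further down).
parse-bbox-func-name=NvDsInferParseCustomSSD
custom-lib-path=./libnvdsinfer_custom_impl_ssd.so

[class-attrs-all]
pre-cluster-threshold=0.5
nms-iou-threshold=0.4
```

In a Python app, this file is attached by setting the nvinfer element's config-file-path property to its path.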

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU)
• DeepStream Version
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type (questions, new requirements, bugs)
• How to reproduce the issue? (This is for bugs. Include which sample app is used, the content of the configuration files, the command line used, and other details for reproducing.)
• Requirement details (This is for new requirements. Include the module name, i.e. which plugin or which sample application, and the function description.)

Hardware Platform: Jetson Xavier NX (Developer Kit Version)
DeepStream Version: 6.0
JetPack Version: 4.6
TensorRT Version: 8.0.1
NVIDIA GPU Driver Version: X Driver 32.6.1
CUDA Version: 10.2
Issue Type: Questions
Requirement details: same as the original question above (a sample nvinfer config file and custom parser guidance for ONNX SSD-Mobilenet-v2 models trained with jetson-inference).

Does the sample below help?

https://docs.nvidia.com/metropolis/deepstream/dev-guide/text/DS_Python_Sample_Apps.html#sample-application-source-details
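
On the custom parser side, below is a minimal sketch of a bounding-box parse function. It assumes the ONNX model exported by jetson-inference exposes two output tensors named `scores` (per-box class confidences, with BACKGROUND at index 0) and `boxes` (normalized x1, y1, x2, y2 corners); those names and the layout should be verified against the actual model (e.g. in Netron). The function name matches the `parse-bbox-func-name` placeholder in the config sketch above:

```cpp
#include <algorithm>
#include <cstring>
#include <vector>

#include "nvdsinfer_custom_impl.h"

/* Sketch of a bbox parser for an SSD-Mobilenet-v2 ONNX model exported by
 * jetson-inference. ASSUMPTIONS: two output tensors, "scores"
 * [numBoxes x numClasses] and "boxes" [numBoxes x 4] holding normalized
 * (x1, y1, x2, y2) corners, with BACKGROUND as class 0. Verify the names
 * and layout of your own model (e.g. with Netron). */
extern "C" bool NvDsInferParseCustomSSD(
    std::vector<NvDsInferLayerInfo> const &outputLayersInfo,
    NvDsInferNetworkInfo const &networkInfo,
    NvDsInferParseDetectionParams const &detectionParams,
    std::vector<NvDsInferObjectDetectionInfo> &objectList)
{
    const NvDsInferLayerInfo *scoresLayer = nullptr;
    const NvDsInferLayerInfo *boxesLayer = nullptr;
    for (const auto &layer : outputLayersInfo) {
        if (!strcmp(layer.layerName, "scores")) scoresLayer = &layer;
        else if (!strcmp(layer.layerName, "boxes")) boxesLayer = &layer;
    }
    if (!scoresLayer || !boxesLayer)
        return false; /* output tensor names did not match */

    const unsigned int numBoxes   = scoresLayer->inferDims.d[0];
    const unsigned int numClasses = scoresLayer->inferDims.d[1];
    const float *scores = static_cast<const float *>(scoresLayer->buffer);
    const float *boxes  = static_cast<const float *>(boxesLayer->buffer);

    for (unsigned int b = 0; b < numBoxes; b++) {
        /* Best class for this box, skipping background (index 0). */
        unsigned int bestClass = 0;
        float bestScore = 0.0f;
        for (unsigned int c = 1; c < numClasses; c++) {
            const float s = scores[b * numClasses + c];
            if (s > bestScore) { bestScore = s; bestClass = c; }
        }
        /* Class IDs stay aligned with labels.txt (BACKGROUND first). */
        if (bestClass == 0 ||
            bestClass >= detectionParams.numClassesConfigured ||
            bestScore < detectionParams.perClassPreclusterThreshold[bestClass])
            continue;

        /* Boxes are normalized corners; scale to the network input size
           and clamp, since nvinfer expects left/top/width/height. */
        const float x1 = boxes[b * 4 + 0] * networkInfo.width;
        const float y1 = boxes[b * 4 + 1] * networkInfo.height;
        const float x2 = boxes[b * 4 + 2] * networkInfo.width;
        const float y2 = boxes[b * 4 + 3] * networkInfo.height;

        NvDsInferObjectDetectionInfo obj{};
        obj.classId = bestClass;
        obj.detectionConfidence = bestScore;
        obj.left   = std::max(0.0f, x1);
        obj.top    = std::max(0.0f, y1);
        obj.width  = std::min((float)networkInfo.width,  x2) - obj.left;
        obj.height = std::min((float)networkInfo.height, y2) - obj.top;
        if (obj.width <= 0.0f || obj.height <= 0.0f)
            continue;
        objectList.push_back(obj);
    }
    return true;
}

/* Compile-time check that the prototype matches what nvinfer expects. */
CHECK_CUSTOM_PARSE_FUNC_PROTOTYPE(NvDsInferParseCustomSSD);
```

Built into a shared library (g++ -shared -fPIC against the DeepStream and TensorRT headers), the resulting .so is what custom-lib-path in the config should point to. The objectDetector_SSD sample shipped with DeepStream shows the same structure for a UFF SSD model and is a good template to adapt.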

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.