• Hardware Platform (Jetson / GPU) GPU - GeForce RTX 4090
• DeepStream Version 6.2
• TensorRT Version 8.5.1.7
• NVIDIA GPU Driver Version (valid for GPU only) 535.146.02
Hello! I have installed DeepStream 6.2 and all the other required libraries on my machine and have successfully run numerous sample applications. Now I want to integrate my ONNX model into the pipeline.
The model takes an image as input, which I need to prepare with some scaling and affine transformations, and it outputs two small tensors of float values.
Basically, what I need is a DeepStream plugin. Here is what I have learned so far:
- I need to create a .so file that encapsulates my ONNX model along with the input pre-processing and output post-processing logic
- This .so file should then be passed to the GStreamer nvinfer plugin, which can be integrated into a DeepStream pipeline (see the config sketch below)
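For reference, this is roughly how I imagine the nvinfer config file that would tie the ONNX model and the .so together. The file names, the network-type=100 / output-tensor-meta combination, and the scale factor are assumptions I copied and adapted from sample configs, not something I have working:

```
[property]
gpu-id=0
# ONNX model; as far as I understand, nvinfer builds and caches a TensorRT engine from it
onnx-file=my_model.onnx
model-engine-file=my_model.onnx_b1_gpu0_fp16.engine
batch-size=1
network-mode=2
# my .so with the custom input/output handling (placeholder name)
custom-lib-path=libmy_custom_impl.so
gie-unique-id=1
# only linear preprocessing (1/255 scaling); I do not see how to express my affine transform here
net-scale-factor=0.0039215697906911373
# option A (seen in sample configs): treat the model as "other" and attach the raw
# output tensors to the metadata, so I can parse my two float tensors downstream
network-type=100
output-tensor-meta=1
# option B: register a custom parse function exported by the .so instead
#parse-classifier-func-name=NvDsInferParseMyModel
```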
Where I am a bit confused is how to create this .so file. I have seen this beautiful tutorial about ONNX model integration, but it doesn't quite hit the spot for me. As I understand it, that post relies heavily on code that the DeepStream/GStreamer developers kindly prepared for seamless integration of YOLO-type models.
My case is a bit different, because I have a very customized output format and a very customized input-processing algorithm.
Could you please point me to some materials that would be a good starting point for creating a DeepStream plugin from scratch? Right now I am focused on this tutorial: Using a Custom Model with DeepStream — DeepStream 6.4 documentation. Something tells me this is exactly what I need, but some code snippets would really help here.
I am also going through nvdsinfer_custom_impl.h, and I believe it is what I need to use to create the .so plugin for my model, but I can't seem to clearly separate where I process the input, where I parse the output, and how the actual loading of the ONNX model happens. Please let me know whether I am headed in the right direction and whether there are samples built around custom models inside the SDK.
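For context, here is the rough parser skeleton I pieced together from nvdsinfer_custom_impl.h and the sample parsers under sources/libs/nvdsinfer_customparser. The function name NvDsInferParseMyModel and the way I fill the attribute list are just my own guesses for illustration:

```cpp
#include <cstring>
#include <string>
#include <vector>

#include "nvdsinfer_custom_impl.h"

/* Hypothetical parser for my two float output tensors, following the
 * NvDsInferClassiferParseCustomFunc signature from nvdsinfer_custom_impl.h. */
extern "C" bool NvDsInferParseMyModel(
    std::vector<NvDsInferLayerInfo> const &outputLayersInfo,
    NvDsInferNetworkInfo const &networkInfo,
    float classifierThreshold,
    std::vector<NvDsInferAttribute> &attrList,
    std::string &descString)
{
    // As I understand it, outputLayersInfo holds one entry per output tensor
    // of the model, so for my model I expect two entries with float buffers.
    for (size_t i = 0; i < outputLayersInfo.size(); ++i) {
        const NvDsInferLayerInfo &layer = outputLayersInfo[i];
        const float *data = static_cast<const float *>(layer.buffer);
        unsigned int numElements = layer.inferDims.numElements;
        if (!data || numElements == 0)
            continue;

        // ... interpret data[0 .. numElements-1] here; as a placeholder I just
        // report the first value of each tensor as an attribute, the way the
        // sample classifier parser fills attrList ...
        NvDsInferAttribute attr;
        attr.attributeIndex = static_cast<unsigned int>(i);
        attr.attributeValue = 0;
        attr.attributeConfidence = data[0];
        attr.attributeLabel = strdup("my_output");
        attrList.push_back(attr);
    }
    (void)networkInfo;
    (void)classifierThreshold;
    (void)descString;
    return true;
}

/* Compile-time check that the function matches the expected prototype. */
CHECK_CUSTOM_CLASSIFIER_PARSE_FUNC_PROTOTYPE(NvDsInferParseMyModel);
```

I would build this with the same kind of Makefile as sources/libs/nvdsinfer_customparser (g++ -shared -fPIC plus the DeepStream and CUDA include paths). What I still do not understand is whether this classifier-style parser is even the right hook for a model that is neither a detector nor a classifier, where my custom affine input preprocessing is supposed to go, and which part of this chain actually loads the ONNX model and builds the engine.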