Issue Type: Question
I just started working on a Jetson AGX Orin and am trying to build a pipeline with two major ML-based stages: (1) up-scaling (super-resolution) of low-resolution video/images, and then (2) object detection on the up-scaled frames/images. If anybody knows related documentation / blogs / samples, please share.
Since I just started working on this platform, it would be great to get some help on how to apply my own ML model inside the Gst-nvdsvideotemplate plugin to up-scale the frames/images that are later used for object detection by a model like “Yolo”. In short, my question is how to apply my up-scaling (super-resolution) model to the video frames/images before the object-detection model runs in the pipeline.
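To make the intent concrete, the pipeline shape I have in mind is roughly the following (only a sketch built with gst_parse_launch; the custom library name libcustom_sr.so, the Yolo config file name, and the element properties are placeholders I made up, not a working setup):

```cpp
#include <gst/gst.h>

int main(int argc, char** argv) {
  gst_init(&argc, &argv);

  // Sketch of the intended pipeline: decode -> batch -> super-resolution
  // (custom lib loaded by nvdsvideotemplate) -> object detection (nvinfer) -> display.
  // libcustom_sr.so and config_infer_yolo.txt are placeholders.
  GError* err = nullptr;
  GstElement* pipeline = gst_parse_launch(
      "filesrc location=input.mp4 ! qtdemux ! h264parse ! nvv4l2decoder ! "
      "mux.sink_0 nvstreammux name=mux batch-size=1 width=1920 height=1080 ! "
      "nvdsvideotemplate customlib-name=libcustom_sr.so ! "
      "nvinfer config-file-path=config_infer_yolo.txt ! "
      "nvvideoconvert ! nvdsosd ! nvegltransform ! nveglglessink",
      &err);
  if (!pipeline) {
    g_printerr("Failed to build pipeline: %s\n", err ? err->message : "unknown");
    return -1;
  }

  gst_element_set_state(pipeline, GST_STATE_PLAYING);
  GMainLoop* loop = g_main_loop_new(nullptr, FALSE);
  g_main_loop_run(loop);  // run until interrupted

  gst_element_set_state(pipeline, GST_STATE_NULL);
  gst_object_unref(pipeline);
  g_main_loop_unref(loop);
  return 0;
}
```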
So far, I figured out that we can modify the video frames/images with the Gst-nvdsvideotemplate plugin, and I looked into customlib_impl.cpp (under /opt/nvidia/deepstream/deepstream/sources/gst-plugins/gst-nvdsvideotemplate/customlib_impl) to see how I could up-scale the pixels with my ML model (.onnx file).
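For the model-loading part, what I picture is converting the .onnx file offline to a TensorRT engine (e.g. with trtexec --onnx=sr_model.onnx --saveEngine=sr_model.engine) and then deserializing that engine once inside the custom library. A rough sketch of that loading step (the engine file name and the loadSREngine helper are placeholders of mine, not code from the sample):

```cpp
#include <cstdio>
#include <fstream>
#include <iterator>
#include <string>
#include <vector>
#include <NvInfer.h>

// Minimal TensorRT logger for the super-resolution engine.
class SRLogger : public nvinfer1::ILogger {
  void log(Severity severity, const char* msg) noexcept override {
    if (severity <= Severity::kWARNING) std::printf("[SR] %s\n", msg);
  }
};

static SRLogger g_logger;

// Placeholder helper: deserialize a prebuilt TensorRT engine file once at start-up.
nvinfer1::ICudaEngine* loadSREngine(const std::string& enginePath,
                                    nvinfer1::IRuntime*& runtime) {
  std::ifstream file(enginePath, std::ios::binary);
  std::vector<char> blob((std::istreambuf_iterator<char>(file)),
                          std::istreambuf_iterator<char>());
  runtime = nvinfer1::createInferRuntime(g_logger);
  if (!runtime) return nullptr;
  return runtime->deserializeCudaEngine(blob.data(), blob.size());
}
```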
At this point I’m puzzled about what to do/use. According to the DeepStream documentation, Gst-nvinfer is the plugin for inference/ML-based processing, so I looked at its documentation (Gst-nvinfer — DeepStream 6.2 Release documentation). However, it only supports the following network types:
- Multi-class object detection
- Multi-label classification
- Semantic segmentation
- Instance segmentation
So it seems I cannot use Gst-nvinfer as-is to load my up-scaling/super-resolution model.
[Question 1] Can I still modify the Gst-nvinfer plugin to up-scale the frames? It seems like a lot of work to modify Gst-nvinfer; should I instead create a new plugin similar to Gst-nvinfer for this purpose?
[Question 2] If I have to implement my own ML loading/inference functions, where should I implement them, and how can I use them in the pipeline to up-scale the video frames/images? My thought was to implement the model loading/inference functions in the customlib_impl.cpp library for Gst-nvdsvideotemplate, but I’m not sure about this. Please clarify.
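For the per-frame inference part, what I imagine is something like the sketch below (again only an illustration; the runSRInference helper is made up, and the device pointers would have to come from mapping the NvBufSurface of the incoming GstBuffer, which I have left out because I don’t know that part yet):

```cpp
#include <cuda_runtime_api.h>
#include <NvInfer.h>

// Placeholder helper: run the super-resolution model on one frame.
// d_lowres  : device pointer to the pre-processed low-resolution input
// d_upscaled: device pointer that receives the up-scaled output
// The binding order must match the input/output order of the ONNX model.
bool runSRInference(nvinfer1::IExecutionContext* context,
                    void* d_lowres, void* d_upscaled, cudaStream_t stream) {
  void* bindings[] = { d_lowres, d_upscaled };
  if (!context->enqueueV2(bindings, stream, nullptr))
    return false;
  return cudaStreamSynchronize(stream) == cudaSuccess;
}
```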
[Question 3] Is there any other plugin with which I can load and run inference with my own up-scaling (super-resolution) model (other than Gst-nvinfer, since Gst-nvinfer does not handle up-scaling/super-resolution)? Perhaps there is a plugin that makes it easy to load and use an ML model for up-scaling/super-resolution of video/images…
The following is the complete environment setup:
• Hardware Platform: Jetson AGX Orin
• DeepStream Version: 6.1
• JetPack Version: 5.0.2 (L4T R35.1.0)
• TensorRT Version: 8.4.1
• NVIDIA GPU Driver Version: CUDA 11.4
• Requirement details
The plugins: (1) Gst-nvdsvideotemplate (mandatory), (2) Gst-nvinfer (optional)
(1) Gst-nvdsvideotemplate — DeepStream 6.2 Release documentation
(2) Gst-nvinfer — DeepStream 6.2 Release documentation