@Fiona.Chen
Thank you so much! This tells me where I should focus for the implementation.
Since I have only just begun working with DeepStream 6.1, I need to clarify a few more things…
I found NVIDIA's super-resolution benchmark model in GitHub - NVIDIA-AI-IOT/jetson_benchmarks: Jetson Benchmark (e.g., "super_resolution_bsd500-bs1.onnx") and converted it into an engine, "super_resolution_bsd500-bs1.engine", on a Jetson AGX Orin.
(The training code is GitHub - dusty-nv/super-resolution: PyTorch super resolution model with RGB support and ONNX exporter).
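(For reference, I did the conversion with trtexec, roughly like this; I am not certain these are the recommended flags for this model:)

/usr/src/tensorrt/bin/trtexec \
    --onnx=./super_resolution_bsd500-bs1.onnx \
    --saveEngine=./super_resolution_bsd500-bs1.engine \
    --fp16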
Can I just use this engine for a simple implementation, for testing purposes? If so, is there a way to test this model in the pipeline without configuring the model-related parameters myself? Since NVIDIA already tests this model in its benchmarks, I am hoping there is a way to drop it into my pipeline with default parameters.
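For instance, would a minimal nvinfer config along these lines be enough to start with? The keys are taken from the nvinfer documentation, but the values are my own guesses for this model, so please correct anything that is wrong:

[property]
gpu-id=0
# Pre-built engine converted from the benchmark ONNX model
model-engine-file=./super_resolution_bsd500-bs1.engine
batch-size=1
# 0=FP32, 1=INT8, 2=FP16
network-mode=2
# 100 = "other", so nvinfer skips detector/classifier post-processing
network-type=100
# Attach the raw output tensors as metadata for downstream elements
output-tensor-meta=1
# 1/255 scaling -- assuming the model expects normalized RGB input
net-scale-factor=0.0039215686
# 0=RGB, 1=BGR
model-color-format=0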
Another question is about the pipeline. I am unclear on how to construct the pipeline with multiple inference models as you suggested and then rebuild the frames. For example, this is my current pipeline:
"gst-launch-1.0 filesrc location=./sample_720p.mp4 ! qtdemux ! h264parse ! nvv4l2decoder ! m.sink_0 nvstreammux name=m batch-size=1 width=1280 height=720 ! nvinfer name=detect_object config-file-path=./my_custom_detection_configuration.txt ! nvvideoconvert ! dsexample full-frame=0 ! nvdsosd ! nvegltransform ! nveglglessink"
→ In this pipeline, the custom nvinfer model performs object detection for now.
Let's say I have implemented the two custom TensorRT inference modules as you suggested.
How do I test this in the pipeline?
For example, should I use a pipeline like the following?
"gst-launch-1.0 filesrc location=./sample_720p.mp4 ! qtdemux ! h264parse ! nvv4l2decoder ! m.sink_0 nvstreammux name=m batch-size=1 width=1280 height=720 !
nvinfer name=super_resolution config-file-path=./my_custom_super_resolution_configuration.txt !
nvdsvideotemplate customlib-name="libcustom_impl.so" !
nvinfer name=detect_object config-file-path=./my_custom_detection_configuration.txt ! nvvideoconvert ! dsexample full-frame=0 ! nvdsosd ! nvegltransform ! nveglglessink"
Will this pipeline work, or is there a better way to include multiple plugins in the pipeline? I also need to know how to debug the C/C++ parts I implemented in the pipeline, so please let me know how to achieve this if possible.
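For debugging, I am guessing something like the following might work, but please confirm (I assume the custom library would need to be built with -g -O0 for gdb to be useful):

# Put the pipeline in a variable so it can be reused (paths are from my setup)
PIPELINE='filesrc location=./sample_720p.mp4 ! qtdemux ! h264parse ! nvv4l2decoder ! m.sink_0 nvstreammux name=m batch-size=1 width=1280 height=720 ! nvinfer name=detect_object config-file-path=./my_custom_detection_configuration.txt ! nvvideoconvert ! dsexample full-frame=0 ! nvdsosd ! nvegltransform ! nveglglessink'

# 1) Raise GStreamer log verbosity (0-9; 3 shows warnings, 5+ is per-buffer detail)
GST_DEBUG=3 gst-launch-1.0 $PIPELINE

# 2) Run the same pipeline under gdb to catch crashes inside the custom .so
gdb --args gst-launch-1.0 $PIPELINE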
The last question is: how do I know which parameters are required for the super-resolution model? You said I can refer to the repo to get these parameters, but I need to know which parameters are required by nvdsinfer or nvdsvideotemplate. Are these parameters used only by nvdsinfer?
Any comments or feedback will be appreciated!
Thanks a lot.