Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU): Jetson
• DeepStream Version: 6.3
• JetPack Version (valid for Jetson only): R35.4.1
• TensorRT Version: 8.5.2.2
• Issue Type (questions, new requirements, bugs): questions
Q1: When the nvstreammux plugin is connected to the nvinfer plugin, my model expects an input size of 224 * 224, but the width and height properties of nvstreammux are set to 1920 and 1080 respectively. The pipeline still runs in this situation. What is the internal operation that converts the 1920 * 1080 stream to a 224 * 224 image? Please describe it in detail. Thank you!
Q2: When I ran the ResNet50 classification model with nvinfer, I used a custom model post-processing function with the following configuration:
The post-processing function is the sample provided by DeepStream at **/opt/nvidia/deepstream/deepstream-6.3/sources/libs/nvdsinfer_customparser/nvdsinfer_customclassifierparser.cpp**. As shown in the figure, I obtained the classification results and probabilities by adding print statements. How can I attach these classification results and probabilities to the corresponding frame_meta?
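For reference, a minimal sketch of how such results can be read downstream once the custom parser fills `attrList` (gst-nvinfer then attaches `NvDsClassifierMeta` to the object metadata); the probe function name and where it is attached are illustrative, not part of the sample:

```cpp
#include <gst/gst.h>
#include "gstnvdsmeta.h"  /* NvDsBatchMeta, NvDsFrameMeta, NvDsClassifierMeta */

/* Pad probe for the nvinfer src pad (illustrative attachment point).
 * Walks frame -> object -> classifier -> label metadata and prints
 * each label with its confidence. */
static GstPadProbeReturn
classifier_meta_probe (GstPad *pad, GstPadProbeInfo *info, gpointer user_data)
{
  GstBuffer *buf = GST_PAD_PROBE_INFO_BUFFER (info);
  NvDsBatchMeta *batch_meta = gst_buffer_get_nvds_batch_meta (buf);
  if (!batch_meta)
    return GST_PAD_PROBE_OK;

  for (NvDsMetaList *l_frame = batch_meta->frame_meta_list; l_frame;
      l_frame = l_frame->next) {
    NvDsFrameMeta *frame_meta = (NvDsFrameMeta *) l_frame->data;
    for (NvDsMetaList *l_obj = frame_meta->obj_meta_list; l_obj;
        l_obj = l_obj->next) {
      NvDsObjectMeta *obj_meta = (NvDsObjectMeta *) l_obj->data;
      for (NvDsMetaList *l_cls = obj_meta->classifier_meta_list; l_cls;
          l_cls = l_cls->next) {
        NvDsClassifierMeta *cls_meta = (NvDsClassifierMeta *) l_cls->data;
        for (NvDsMetaList *l_lbl = cls_meta->label_info_list; l_lbl;
            l_lbl = l_lbl->next) {
          NvDsLabelInfo *label = (NvDsLabelInfo *) l_lbl->data;
          g_print ("frame %d: %s (prob %.3f)\n", frame_meta->frame_num,
              label->result_label, label->result_prob);
        }
      }
    }
  }
  return GST_PAD_PROBE_OK;
}
```

For attaching your own data at frame level instead, `nvds_acquire_user_meta_from_pool()` together with `nvds_add_user_meta_to_frame()` is the generic mechanism.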
We scale that inside the nvinfer plugin. Since it is open source, you can refer to the source code directly. Here is a diagram of the nvinfer source code.
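Roughly, gst-nvinfer scales each batched frame to the network resolution before the tensor-conversion kernels run. A minimal sketch of that kind of scaling call using the NvBufSurfTransform API (the surfaces, sizes, and filter choice here are illustrative, not nvinfer's exact code):

```cpp
#include "nvbufsurface.h"
#include "nvbufsurftransform.h"

/* Illustrative only: scale a full 1920x1080 frame down to a 224x224
 * network input, the way gst-nvinfer's preprocessing path transforms
 * frames before inference. */
static bool
scale_to_network_input (NvBufSurface *src, NvBufSurface *dst)
{
  NvBufSurfTransformRect src_rect = { 0, 0, 1920, 1080 };  /* full frame */
  NvBufSurfTransformRect dst_rect = { 0, 0, 224, 224 };    /* network size */

  NvBufSurfTransformParams params = {};
  params.transform_flag = NVBUFSURF_TRANSFORM_FILTER |
      NVBUFSURF_TRANSFORM_CROP_SRC | NVBUFSURF_TRANSFORM_CROP_DST;
  params.transform_filter = NvBufSurfTransformInter_Bilinear;
  params.src_rect = &src_rect;
  params.dst_rect = &dst_rect;

  return NvBufSurfTransform (src, dst, &params) ==
      NvBufSurfTransformError_Success;
}
```

With `maintain-aspect-ratio=1` in the nvinfer config file, the frame is letterboxed into the destination rectangle instead of being stretched.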
Thanks! A couple of follow-up questions:

Question 1: What is the principle of the nvstreammux plugin? If the plugin's width and height properties are set to 224 * 224 and the input video stream is 1280 * 720, how will the stream be processed? Is it directly resized? Is there code I can refer to?

Question 2: When deploying a model with an input size of 224 * 224, I found that setting the nvstreammux width and height equal to the model's input size gives higher inference accuracy, while other values give lower accuracy. I suspect nvstreammux resizes video frames the same way my model's inference images are preprocessed, but that when the nvstreammux size differs from the model's input size, DeepStream's internal processing differs from a direct resize, which leads to this phenomenon. Is my guess correct?
The Gst-nvstreammux plugin forms a batch of frames from multiple input sources; forming batches is its main function. Please refer to our guide DS_plugin_gst-nvstreammux.
You can refer to our FAQ to tune the plugin's parameters to improve accuracy. Our suggestion is to set the nvstreammux width and height parameters to match your source video.
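Following that suggestion, a minimal sketch of keeping the mux at the source resolution and letting nvinfer scale to the 224 * 224 network input (element names, property values, and the config path are illustrative):

```cpp
#include <gst/gst.h>

int
main (int argc, char *argv[])
{
  gst_init (&argc, &argv);

  GstElement *streammux = gst_element_factory_make ("nvstreammux", "mux");
  GstElement *pgie = gst_element_factory_make ("nvinfer", "primary-gie");

  /* Keep the mux at the source resolution (1920x1080 here) rather
   * than forcing it to the model's 224x224. */
  g_object_set (G_OBJECT (streammux),
      "width", 1920,
      "height", 1080,
      "batch-size", 1,
      "batched-push-timeout", 40000,
      NULL);

  /* nvinfer reads the 224x224 network size from the model/config and
   * performs the scaling itself during preprocessing. */
  g_object_set (G_OBJECT (pgie),
      "config-file-path", "config_infer_primary.txt",  /* illustrative */
      NULL);

  /* ... create sources and a sink, link the elements, and run ... */
  return 0;
}
```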