Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU): NVIDIA T4 GPU
• DeepStream Version: 5.1 and 6.0
• JetPack Version (valid for Jetson only): –
• TensorRT Version: 8.0.1-1+cuda11.3
• NVIDIA GPU Driver Version (valid for GPU only): 460.32.03
• Issue Type (questions, new requirements, bugs): QUESTION
• How to reproduce the issue? (This is for bugs. Include which sample app is used, the configuration file contents, the command line used, and other details for reproducing.)
The documentation for symmetric-padding states: "Indicates whether to pad image symmetrically while scaling input. DeepStream pads the images asymmetrically by default."
This suggests that there is a way to pad images asymmetrically in DeepStream - I imagine bottom-right padding is the default. Unfortunately, the symmetric-padding flag does not work in my pipeline. The output is:

Unknown or legacy key specified 'symmetric-padding' for group [property]

and the output video is squashed instead of padded. When the enable-padding flag is enabled on the streammux element, the padding is done symmetrically. Is there a way to do it asymmetrically in DeepStream?
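For reference, the relevant part of my nvinfer config file looks roughly like this - a minimal sketch using the sample Primary_Detector model paths from the container; my actual weird_config.txt may differ in the other keys:

```ini
# Sketch of the [property] group in the nvinfer config file.
# Paths point to the Primary_Detector sample model shipped in the container;
# keys other than symmetric-padding are the usual sample defaults.
[property]
gpu-id=0
net-scale-factor=0.00392156862745098
model-file=/opt/nvidia/deepstream/deepstream-6.0/samples/models/Primary_Detector/resnet10.caffemodel
proto-file=/opt/nvidia/deepstream/deepstream-6.0/samples/models/Primary_Detector/resnet10.prototxt
batch-size=1
network-mode=1
num-detected-classes=4
# per the docs, symmetric-padding only applies when aspect ratio is maintained
maintain-aspect-ratio=1
# the key that gets rejected as "Unknown or legacy" in the 6.0-ea container
symmetric-padding=1
```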
• Requirement details( This is for new requirement. Including the module name-for which plugin or for which sample application, the function description)
Hi, I have checked the source code - it seems that the container nvcr.io/nvidia/deepstream:6.0-ea-21.06-devel does not support this flag, but the proper DeepStream 6.0 container - nvcr.io/nvidia/deepstream:6.0-devel - does. I have tried switching this flag on and off in infer_config.txt, but it does not seem to have any effect.
The question persists: how do I enable asymmetric padding in DeepStream?
I have read the source code of the nvinfer plugin. It seems that the input-tensor-meta variable, when set to TRUE (by default), blocks any changes to images in the buffer. When I try to set input-tensor-meta to FALSE or 0, DeepStream prints:

Unknown or legacy key specified 'input-tensor-meta' for group [property]

Can you tell me how to enable asymmetric padding in DeepStream, instead of me having to read the source code?
This is my pipeline inside the nvcr.io/nvidia/deepstream:6.0-devel container, which you can easily replicate on your side (all the needed resources are already inside the container):
root@9329ce241db4:/mounted# gst-launch-1.0 nvstreammux name=mux batch-size=1 width=608 height=342 ! queue ! nvinfer config-file-path=/mounted/weird_config.txt input-tensor-meta=0 batch-size=1 ! queue ! nvstreamdemux name=demux \
> uridecodebin uri=file:///opt/nvidia/deepstream/deepstream-6.0/samples/streams/sample_1080p_h264.mp4 ! nvvideoconvert ! queue ! mux.sink_0 \
> demux.src_0 ! queue ! nvvideoconvert ! nvv4l2h264enc ! h264parse ! qtmux ! filesink location=/mounted/data/images/deepstream-streammux-set/src_0/horizontal.mp4
(gst-plugin-scanner:14): GStreamer-WARNING **: 12:26:22.224: Failed to load plugin '/usr/lib/x86_64-linux-gnu/gstreamer-1.0/deepstream/libnvdsgst_inferserver.so': libtritonserver.so: cannot open shared object file: No such file or directory
(gst-plugin-scanner:14): GStreamer-WARNING **: 12:26:22.233: Failed to load plugin '/usr/lib/x86_64-linux-gnu/gstreamer-1.0/deepstream/libnvdsgst_udp.so': librivermax.so.0: cannot open shared object file: No such file or directory
Unknown or legacy key specified 'input-tensor-meta' for group [property]
Setting pipeline to PAUSED ...
ERROR: ../nvdsinfer/nvdsinfer_model_builder.cpp:1484 Deserialize engine failed because file path: /opt/nvidia/deepstream/deepstream-6.0/samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine open error
0:00:02.662365186 13 0x55eff8ba8160 WARN nvinfer gstnvinfer.cpp:635:gst_nvinfer_logger:<nvinfer0> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1889> [UID = 1]: deserialize engine from file :/opt/nvidia/deepstream/deepstream-6.0/samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine failed
0:00:02.662424770 13 0x55eff8ba8160 WARN nvinfer gstnvinfer.cpp:635:gst_nvinfer_logger:<nvinfer0> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1996> [UID = 1]: deserialize backend context from engine from file :/opt/nvidia/deepstream/deepstream-6.0/samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine failed, try rebuild
0:00:02.662464525 13 0x55eff8ba8160 INFO nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger:<nvinfer0> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1914> [UID = 1]: Trying to create engine from model files
WARNING: [TRT]: Detected invalid timing cache, setup a local cache instead
0:00:14.212304120 13 0x55eff8ba8160 INFO nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger:<nvinfer0> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1947> [UID = 1]: serialize cuda engine to file: /opt/nvidia/deepstream/deepstream-6.0/samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine successfully
INFO: ../nvdsinfer/nvdsinfer_model_builder.cpp:610 [Implicit Engine Info]: layers num: 3
0 INPUT kFLOAT input_1 3x608x608
1 OUTPUT kFLOAT conv2d_bbox 16x38x38
2 OUTPUT kFLOAT conv2d_cov/Sigmoid 4x38x38
0:00:14.216990067 13 0x55eff8ba8160 INFO nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus:<nvinfer0> [UID 1]: Load new model:/mounted/weird_config.txt sucessfully
Pipeline is PREROLLING ...
WARNING: from element /GstPipeline:pipeline0/GstNvStreamMux:mux: Rounding muxer output height to the next multiple of 4: 344
Additional debug info:
gstnvstreammux.c(2803): gst_nvstreammux_change_state (): /GstPipeline:pipeline0/GstNvStreamMux:mux
Redistribute latency...
Pipeline is PREROLLED ...
Setting pipeline to PLAYING ...
New clock: GstSystemClock
Got EOS from element "pipeline0".
Execution ended after 0:00:02.471190176
Setting pipeline to PAUSED ...
Setting pipeline to READY ...
Setting pipeline to NULL ...
Freeing pipeline ...
I am building a really simple pipeline and trying to enable a simple thing in accordance with the documentation - I don't understand why it is so hard to enable such a simple feature.
I know that symmetric-padding does not take effect because, as you can see in the pipeline, I am saving the frames that enter and leave the nvinfer element as a video file. I have tried various configurations - you can try them yourself and verify my claims.
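One of the variations I tried, for example, moves the padding onto the muxer (reconstructed from memory, so treat the exact line as approximate):

```shell
# Same pipeline as above, with enable-padding=1 added on nvstreammux.
gst-launch-1.0 nvstreammux name=mux batch-size=1 width=608 height=342 enable-padding=1 ! queue ! \
  nvinfer config-file-path=/mounted/weird_config.txt batch-size=1 ! queue ! nvstreamdemux name=demux \
  uridecodebin uri=file:///opt/nvidia/deepstream/deepstream-6.0/samples/streams/sample_1080p_h264.mp4 ! nvvideoconvert ! queue ! mux.sink_0 \
  demux.src_0 ! queue ! nvvideoconvert ! nvv4l2h264enc ! h264parse ! qtmux ! filesink location=/mounted/data/images/deepstream-streammux-set/src_0/horizontal.mp4
```

With this, the output frames do come out padded, but symmetrically (equal borders on both sides), which is exactly the behaviour I am trying to avoid.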
Can you explain how to use the nvdspreprocess element in this pipeline to enable asymmetric padding?
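For context, here is how far I got with a preprocess config, adapted from the config_preprocess.txt sample shipped in the container - the key names and values below are my best reading of the docs, so some of them may well be wrong:

```ini
# Sketch of an nvdspreprocess config (adapted from the shipped sample).
# tensor-name and network-input-shape match the resnet10 engine from my logs;
# everything else is taken from the sample defaults and may need adjusting.
[property]
enable=1
target-unique-ids=1
process-on-frame=1
network-input-order=0
processing-width=608
processing-height=608
maintain-aspect-ratio=1
# 0 should give the default asymmetric (pad right/bottom) behaviour I am after
symmetric-padding=0
network-input-shape=1;3;608;608
network-color-format=0
tensor-data-type=0
tensor-name=input_1
custom-lib-path=/opt/nvidia/deepstream/deepstream/lib/gst-plugins/libcustom2d_preprocess.so
custom-tensor-preparation-function=CustomTensorPreparation

[user-configs]
pixel-normalization-factor=0.003921568

[group-0]
src-ids=0
custom-input-transformation-function=CustomAsyncTransformation
process-on-roi=0
```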
OK, so in that scenario the streammux element should only be responsible for batching. As you can see, I have two video sources - one vertical and one horizontal - but the output resolution of streammux must be specified (so it must be either vertical or horizontal). How do I overcome this issue?