Does nvinferserver support padding without resizing (enlarging small) input?

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU): GPU
• DeepStream Version: 6.2
• JetPack Version (valid for Jetson only):
• TensorRT Version: 8.5
• NVIDIA GPU Driver Version (valid for GPU only): 625.x.x.x
• Issue Type (questions, new requirements, bugs): questions
• How to reproduce the issue? (This is for bugs. Include which sample app is used, the configuration file content, the command line used, and other details for reproducing.)
• Requirement details (This is for a new requirement. Include the module name, i.e. for which plugin or which sample application, and the function description.)

Hello,
Does the nvinferserver plugin support image padding without scaling the input in secondary mode?

  1. What do you mean by "secondary mode"?
  2. You can use maintain_aspect_ratio and symmetric_padding to control how the input is padded and scaled; please refer to the nvinferserver documentation. As you know, if the input data's resolution is not the same as the model's, it will be scaled. What do you mean by "padding without scaling input"?
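For reference, these two options live in the preprocess block of the nvinferserver config (protobuf text format). A minimal sketch, with illustrative values and other fields omitted:

```
infer_config {
  preprocess {
    maintain_aspect_ratio: 1   # scale while preserving the source aspect ratio
    symmetric_padding: 1       # pad equally on both sides instead of bottom/right only
  }
}
```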

Thanks @fanzh for quick reply.

By secondary mode I mean that we use nvinferserver as an SGIE, with the input process_mode set to PROCESS_MODE_CLIP_OBJECTS:

input_control {
  process_mode: PROCESS_MODE_CLIP_OBJECTS
  interval: 0
  operate_on_gie_id: 6
  operate_on_class_ids: [1]
  async_mode: false
}

We want to maintain the aspect ratio, but we do not want nvinferserver to upscale the image; it should only pad it with black pixels. For example, if the clipped object is 120x85 and the model input dimension is [700, 500, 3], it should place the 120x85 clipped object anywhere in the 700x500 (HxW) canvas without scaling it up to 700x500, because upscaling small crops to 700x500 badly degrades them.
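The behavior we want can be sketched in a few lines of NumPy (an illustration of the desired preprocessing, not DeepStream code; the canvas size and centering are assumptions):

```python
import numpy as np

def pad_without_scaling(clip, out_h=700, out_w=500):
    # Center the clipped object on a black canvas of the model's input
    # resolution; crop (rather than scale) if the clip exceeds the canvas.
    h = min(clip.shape[0], out_h)
    w = min(clip.shape[1], out_w)
    canvas = np.zeros((out_h, out_w, 3), dtype=clip.dtype)
    top = (out_h - h) // 2
    left = (out_w - w) // 2
    canvas[top:top + h, left:left + w] = clip[:h, :w]
    return canvas

# An 85x120 (HxW) crop, as in the example above.
obj = np.full((85, 120, 3), 255, dtype=np.uint8)
padded = pad_without_scaling(obj)
print(padded.shape)  # (700, 500, 3)
```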

  1. If the source is 120x85 and the model's input is [700, 500, 3], nvinferserver will scale it using an interpolation method. The nvinferserver plugin is open source from version 6.2; please refer to CropSurfaceConverter::resizeBatch.
  2. We suggest using nvdspreprocess + nvinferserver: you can customize the preprocess plugin to do the preprocessing because it is open source, and nvinferserver will then accept the preprocessed tensor directly; in particular, nvinferserver's input-tensor-meta setting must be true. Please refer to the deepstream-preprocess-test and deepstream-3d-action-recognition samples in the DeepStream SDK.
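If I read the 6.2 config proto correctly, consuming the tensor attached by nvdspreprocess is enabled in the nvinferserver protobuf config roughly like this (a sketch; field names should be checked against nvdsinferserver_plugin.proto in your SDK version):

```
input_tensor_from_meta {
  is_first_dim_batch: true
}
```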

There is another solution: you can use videobox after the decoder to pad to the model's resolution (negative left/top/bottom/right values add borders). Here is a sample: gst-launch-1.0 videotestsrc ! video/x-raw,width=300,height=300 ! videobox left=-10 top=-100 bottom=-200 right=-300 ! xvimagesink

Two more observations; could you please help with the points below?

  1. We are not able to use frame_scaling_filter: 3 with nvinferserver in SGIE mode with PROCESS_MODE_CLIP_OBJECTS.
  2. When we use the streammux interpolation-method=3 attribute together with nvinferserver as PGIE using frame_scaling_filter: 3, the pipeline fails if the input at streammux is 1280x720 and the streammux output resolution is 1920x1080:

CUDA error at nvbufsurftransform.cpp:2817 code=-23(NPP_RESIZE_FACTOR_ERROR) “nppiResizeSqrPixel_8u_C1R (src_ptr + src_offset, jnppSrcSize, jsrc_pitch, jnppSrcROI, intermediate_buffer+dst_offset, jdst_pitch, jnppDstROI, scale_x, scale_y, jdx, jdy, nppInterFlag)”

@fanzh , I will check and update you. Thank you.

Does videobox support changing the left, top, bottom, and right attributes dynamically? Can it be used for the case below?

In our use case, we have a PGIE for face detection, and we want to pad the clipped objects going into the SGIE instead of scaling them.

Hello @fanzh,
We are looking at the same kind of use case that @dilip.patel is facing right now.

Kindly share your thoughts on this.

videobox does not support setting properties dynamically. Please double-check, because it is a GStreamer open-source plugin.

I will reproduce and update.

Thanks for the report; I can reproduce this issue and we will investigate. Could you share why you need to use the NvBufSurfTransformInter_Algo2 method? In the meantime, can you try the other methods? Please refer to /opt/nvidia/deepstream/deepstream-6.2/sources/includes/nvbufsurftransform.h.

Thanks @fanzh ,

I have tried NvBufSurfTransformInter_Bilinear and NvBufSurfTransformInter_Nearest, but the scaled image does not look identical to our existing OpenCV-based scaling.
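To make "does not look identical" measurable, a simple per-pixel metric (a sketch, not part of DeepStream or OpenCV; compare the GPU-scaled frame against the OpenCV reference) can quantify the gap between the two scalers:

```python
import numpy as np

def mean_abs_diff(a, b):
    # Mean absolute pixel difference between two same-shaped uint8 images.
    return float(np.mean(np.abs(a.astype(np.int32) - b.astype(np.int32))))

# Toy example: identical images differ by 0; a constant +3 offset shows up directly.
ref = np.zeros((4, 4), dtype=np.uint8)
shifted = ref + 3
print(mean_abs_diff(ref, ref), mean_abs_diff(ref, shifted))  # 0.0 3.0
```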

Hi @fanzh ,

I have tried to run the pipeline with nvdspreprocess in secondary mode and I am seeing the error below. I am using nvinferserver as the SGIE plugin after the nvdspreprocess plugin.

The pipeline is as follows:

video src → streammux → PGIE → SGIE-1 (Conditional inferencing) → nvdspreprocess → SGIE-2 → nvmsgconv → nvmsgbroker

self.source_uri file:///tmp/test.mp4
[generic_gstreamer.py:684:create_uridecode_bin:20230529T14:07:26:INFO] Creating uridecodebin for e832b4d5-690d-49ce-a096-0b992a3a39f1 for file:///tmp/test.mp4
[stream.py:170:run:20230529T14:07:26:INFO] source added successfully : e832b4d5-690d-49ce-a096-0b992a3a39f1
WARNING: infer_proto_utils.cpp:154 auto-update preprocess.normalize.scale_factor to 1.0000
INFO: infer_grpc_backend.cpp:169 TritonGrpcBackend id:8 initialized for model: ensemble_med_trs
[generic_gstreamer.py:102:run_pipeline:20230529T14:07:29:INFO] Starting pipeline
WARNING: infer_proto_utils.cpp:154 auto-update preprocess.normalize.scale_factor to 1.0000
INFO: infer_grpc_backend.cpp:169 TritonGrpcBackend id:8 initialized for model: ensemble_med_trs
Cuda failure: status=2
[stream_manager.py:360:process_bus_message_callback:20230529T14:07:32:DEBUG] Element stream_muxer changed state from to , pending
[stream_manager.py:360:process_bus_message_callback:20230529T14:07:32:DEBUG] Element source-bin-15 changed state from to , pending
[stream_manager.py:360:process_bus_message_callback:20230529T14:07:32:DEBUG] Element pipeline0 changed state from to , pending
[stream_manager.py:318:process_bus_message_callback:20230529T14:07:32:ERROR] Error: gst-resource-error-quark: Failed to set tensor buffer pool to active (1): gstnvdspreprocess.cpp(723): gst_nvdspreprocess_start (): /GstPipeline:pipeline0/GstNvDsPreProcess:embedding_preprocessor

[stream_manager.py:323:process_bus_message_callback:20230529T14:07:32:ERROR] Error from source embedding_preprocessor
[stream_manager.py:318:process_bus_message_callback:20230529T14:07:32:ERROR] Error: gst-resource-error-quark: Failed to set buffer pool to active (1): gstnvdspreprocess.cpp(661): gst_nvdspreprocess_start (): /GstPipeline:pipeline0/GstNvDsPreProcess:embedding_preprocessor

[stream_manager.py:323:process_bus_message_callback:20230529T14:07:32:ERROR] Error from source embedding_preprocessor

config_preprocess.txt (3.8 KB)

  1. Could you share more logs? Please run "export GST_DEBUG=6" first to raise GStreamer's log level, then run again; you can redirect the logs to a file.
  2. I can't find "embedding_preprocessor" in the DeepStream SDK; is this custom code?
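For example, the verbose GStreamer log can be captured to a file like this (the application name is a placeholder for your own launcher):

```shell
# Raise GStreamer log verbosity for the next run of the application.
export GST_DEBUG=6
# python3 my_pipeline.py > app.log 2> gst_debug.log   # placeholder app; GStreamer logs go to stderr
echo "GST_DEBUG=$GST_DEBUG"
```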

@fanzh ,

Please find attached requested logs.
embedding_preprocessor is the nvdspreprocess plugin; it is the name we assigned to that element.

debug.log (7.2 MB)

0:00:06.174858896 188189 0x16e7f30 ERROR nvdspreprocessallocator gstnvdspreprocess_allocator.cpp:111:gst_nvdspreprocess_allocator_alloc: failed to allocate cuda malloc for tensor with error cudaErrorMemoryAllocation
0:00:06.174863648 188189 0x16e7f30 WARN GST_BUFFER gstbuffer.c:951:gst_buffer_new_allocate: failed to allocate 88 bytes

From the log, there is a CUDA memory allocation error. Can you narrow down this issue by comparing with the DeepStream sample deepstream-preprocess-test?