Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU): GPU
• DeepStream Version: 6.2
• JetPack Version (valid for Jetson only):
• TensorRT Version: 8.5
• NVIDIA GPU Driver Version (valid for GPU only): 625.x.x.x
• Issue Type (questions, new requirements, bugs): questions
• How to reproduce the issue? (This is for bugs. Include which sample app is used, the configuration file contents, the command line used, and other details for reproducing)
• Requirement details (This is for new requirements. Include the module name, i.e. which plugin or which sample application, and the function description)
Hello,
Does the nvinferserver plugin support image padding without scaling the input in secondary mode?
You can use maintain_aspect_ratio and symmetric_padding to control how the input is padded and scaled; please refer to the nvinferserver documentation. As you know, if the input data's resolution is not the same as the model's, it will be scaled. What do you mean by "padding without scaling input"?
We want to maintain the aspect ratio, but we do not want nvinferserver to upscale the image; it should only pad with black pixels. For example, if our clipped object is 120x85 and the model input dimension is [700, 500, 3], it should place the 120x85 object anywhere in the 700x500 (HxW) canvas without scaling it up to 700x500, because upscaling small images that far destroys detail.
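Since the stock plugin always scales, the transform being asked for can be sketched in plain NumPy. This is only an illustration of the desired behavior, not DeepStream code; pad_no_scale is a hypothetical helper and the crop is anchored top-left for simplicity:

```python
import numpy as np

def pad_no_scale(obj, dst_h, dst_w):
    """Place a small HxWx3 crop into a black dst_h x dst_w canvas
    without any resizing (top-left anchored for simplicity)."""
    h, w = obj.shape[:2]
    if h > dst_h or w > dst_w:
        raise ValueError("object larger than canvas; would need downscaling")
    canvas = np.zeros((dst_h, dst_w, 3), dtype=obj.dtype)
    canvas[:h, :w] = obj  # remaining area stays black (zero) padding
    return canvas

# Stand-in for a 120x85 (WxH) clipped object and a [700, 500, 3] (HxWxC) input
crop = np.full((85, 120, 3), 255, dtype=np.uint8)
padded = pad_no_scale(crop, 700, 500)
print(padded.shape)  # (700, 500, 3)
```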
If the source is 120x85 and the model's input is [700, 500, 3], nvinferserver will scale it using the configured interpolation method. The nvinferserver plugin is open source as of version 6.2; please refer to CropSurfaceConverter::resizeBatch.
We suggest using "nvdspreprocess + nvinferserver": since the preprocess plugin is open source, you can customize it to do the preprocessing, and nvinferserver will then accept the preprocessed tensor directly. In particular, nvinferserver's input-tensor-meta should be set to true. Please refer to the deepstream-preprocess-test and deepstream-3d-action-recognition samples in the DeepStream SDK.
There is another solution: you can use videobox after the decoder to pad to the model's resolution. Here is a sample:

gst-launch-1.0 videotestsrc ! video/x-raw,width=300,height=300 ! videobox left=-10 top=-100 bottom=-200 right=-300 ! xvimagesink
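In videobox, negative border values add padding rather than crop. A small helper for computing symmetric (centered) padding values could look like this; center_pad_args is our own illustrative name, not part of any DeepStream or GStreamer API:

```python
def center_pad_args(src_w, src_h, dst_w, dst_h):
    """Compute videobox left/right/top/bottom values (negative = add border)
    that center a src frame inside a dst canvas without scaling."""
    if src_w > dst_w or src_h > dst_h:
        raise ValueError("source larger than target; videobox would crop, not pad")
    pad_x, pad_y = dst_w - src_w, dst_h - src_h
    return {
        "left": -(pad_x // 2),
        "right": -(pad_x - pad_x // 2),
        "top": -(pad_y // 2),
        "bottom": -(pad_y - pad_y // 2),
    }

# 120x85 (WxH) object centered in a 500x700 (WxH) canvas
print(center_pad_args(120, 85, 500, 700))
# {'left': -190, 'right': -190, 'top': -307, 'bottom': -308}
```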
We have two more observations. Can you please help with them?
We are not able to use frame_scaling_filter: 3 with nvinferserver in SGIE mode with PROCESS_MODE_CLIP_OBJECTS.
When we use the streammux interpolation-method=3 attribute together with nvinferserver as PGIE using frame_scaling_filter: 3, the pipeline fails if the input to streammux is 1280x720 and the streammux output resolution is 1920x1080:
CUDA error at nvbufsurftransform.cpp:2817 code=-23(NPP_RESIZE_FACTOR_ERROR) “nppiResizeSqrPixel_8u_C1R (src_ptr + src_offset, jnppSrcSize, jsrc_pitch, jnppSrcROI, intermediate_buffer+dst_offset, jdst_pitch, jnppDstROI, scale_x, scale_y, jdx, jdy, nppInterFlag)”
Thanks for the report. I can reproduce this issue; we will investigate. Could you share why you need to use the NvBufSurfTransformInter_Algo2 method? Thanks! For now, could you try other methods? Please refer to /opt/nvidia/deepstream/deepstream-6.2/sources/includes/nvbufsurftransform.h.
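A plausible explanation for the NPP_RESIZE_FACTOR_ERROR (our assumption, not confirmed in this thread): NPP's super-sampling interpolation only supports downscaling, i.e. scale factors below 1.0, while 1280x720 to 1920x1080 is an upscale:

```python
# Scale factors for the failing transform. Assumption (unconfirmed here):
# the super-sampling path in NPP requires scale_x < 1 and scale_y < 1.
src_w, src_h = 1280, 720
dst_w, dst_h = 1920, 1080
scale_x, scale_y = dst_w / src_w, dst_h / src_h
print(scale_x, scale_y)              # 1.5 1.5 -> an upscale
print(scale_x < 1 and scale_y < 1)   # False: violates the assumed constraint
```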
I have tried NvBufSurfTransformInter_Bilinear and NvBufSurfTransformInter_Nearest, but the scaled image does not look identical to our existing OpenCV-based scaling.
I have tried to run the pipeline with nvdspreprocess in secondary mode and I am seeing the error below. I am using nvinferserver as the SGIE plugin after the nvdspreprocess plugin.
self.source_uri file:///tmp/test.mp4
[generic_gstreamer.py:684:create_uridecode_bin:20230529T14:07:26:INFO] Creating uridecodebin for e832b4d5-690d-49ce-a096-0b992a3a39f1 for file:///tmp/test.mp4
[stream.py:170:run:20230529T14:07:26:INFO] source added successfully : e832b4d5-690d-49ce-a096-0b992a3a39f1
WARNING: infer_proto_utils.cpp:154 auto-update preprocess.normalize.scale_factor to 1.0000
INFO: infer_grpc_backend.cpp:169 TritonGrpcBackend id:8 initialized for model: ensemble_med_trs
[generic_gstreamer.py:102:run_pipeline:20230529T14:07:29:INFO] Starting pipeline
WARNING: infer_proto_utils.cpp:154 auto-update preprocess.normalize.scale_factor to 1.0000
INFO: infer_grpc_backend.cpp:169 TritonGrpcBackend id:8 initialized for model: ensemble_med_trs Cuda failure: status=2
[stream_manager.py:360:process_bus_message_callback:20230529T14:07:32:DEBUG] Element stream_muxer changed state from to , pending
[stream_manager.py:360:process_bus_message_callback:20230529T14:07:32:DEBUG] Element source-bin-15 changed state from to , pending
[stream_manager.py:360:process_bus_message_callback:20230529T14:07:32:DEBUG] Element pipeline0 changed state from to , pending
[stream_manager.py:318:process_bus_message_callback:20230529T14:07:32:ERROR] Error: gst-resource-error-quark: Failed to set tensor buffer pool to active (1): gstnvdspreprocess.cpp(723): gst_nvdspreprocess_start (): /GstPipeline:pipeline0/GstNvDsPreProcess:embedding_preprocessor
[stream_manager.py:323:process_bus_message_callback:20230529T14:07:32:ERROR] Error from source embedding_preprocessor
[stream_manager.py:318:process_bus_message_callback:20230529T14:07:32:ERROR] Error: gst-resource-error-quark: Failed to set buffer pool to active (1): gstnvdspreprocess.cpp(661): gst_nvdspreprocess_start (): /GstPipeline:pipeline0/GstNvDsPreProcess:embedding_preprocessor
[stream_manager.py:323:process_bus_message_callback:20230529T14:07:32:ERROR] Error from source embedding_preprocessor
There is no update from you for a period, assuming this is not an issue any more. Hence we are closing this topic. If need further support, please open a new one. Thanks
0:00:06.174858896 188189 0x16e7f30 ERROR nvdspreprocessallocator gstnvdspreprocess_allocator.cpp:111:gst_nvdspreprocess_allocator_alloc: failed to allocate cuda malloc for tensor with error cudaErrorMemoryAllocation
0:00:06.174863648 188189 0x16e7f30 WARN GST_BUFFER gstbuffer.c:951:gst_buffer_new_allocate: failed to allocate 88 bytes
From the log, there is a CUDA memory allocation error. Can you narrow down this issue by comparing with the DeepStream sample deepstream-preprocess-test?
If the object resolution is bigger than the model's resolution, how do you deal with that case?
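One common way to handle both cases (our suggestion, not something stated in this thread) is a letterbox with the scale factor capped at 1.0: objects larger than the canvas are downscaled to fit, while smaller ones are left at native size and only padded. letterbox_dims is a hypothetical helper:

```python
def letterbox_dims(src_w, src_h, dst_w, dst_h):
    """Return (scale, new_w, new_h): downscale to fit when the object is
    larger than the canvas, but never upscale (factor capped at 1.0)."""
    scale = min(dst_w / src_w, dst_h / src_h, 1.0)
    new_w, new_h = int(src_w * scale), int(src_h * scale)
    return scale, new_w, new_h

print(letterbox_dims(120, 85, 500, 700))    # (1.0, 120, 85): kept as-is, only padded
print(letterbox_dims(900, 1400, 500, 700))  # (0.5, 450, 700): downscaled to fit
```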