Gst-nvdsexample full-frame option does not seem to work

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU) GPU
• DeepStream Version 6.1.1
• JetPack Version (valid for Jetson only)
• TensorRT Version 8.4.1.5-1+cuda11.6
• NVIDIA GPU Driver Version (valid for GPU only) 515.65.01
• Issue Type( questions, new requirements, bugs) question
• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing)
• Requirement details( This is for new requirement. Including the module name-for which plugin or for which sample application, the function description)

My requirement is to crop the input frame based on an ROI and pad the cropped image with black pixels to meet the model's inference input requirement. I have tried using the gst-nvdsexample plugin in full-frame mode and updated inMat accordingly, but the original image is being sent for inferencing, not the one updated by the gst-nvdsexample plugin. When I dump a JPG from gst-nvdsexample, it shows the correctly updated image.
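
For context, the crop-and-pad step I apply to the converted frame looks roughly like the following (a minimal sketch only; the helper name crop_and_pad and the ROI value are placeholders, the real code is in the attached gstdsexample.cpp):

#include <opencv2/imgproc.hpp>

// Crop `frame` to `roi`, then pad with black pixels back to the original frame size.
// `frame` is the RGBA cv::Mat that get_converted_mat wraps around the converted buffer.
static void crop_and_pad (cv::Mat &frame, const cv::Rect &roi)
{
  cv::Mat cropped = frame (roi).clone ();
  cv::Mat padded;
  cv::copyMakeBorder (cropped, padded,
                      0, frame.rows - cropped.rows,     // pad bottom
                      0, frame.cols - cropped.cols,     // pad right
                      cv::BORDER_CONSTANT, cv::Scalar (0, 0, 0, 255));  // black pixels
  padded.copyTo (frame);                                // overwrite the frame in place
}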

Please find attached updated dsexample.cpp for reference.
gstdsexample.cpp (36.3 KB)

You can use nvdspreprocess + nvinfer to do this: nvdspreprocess supports ROIs, and nvinfer supports padding via the symmetric-padding and maintain-aspect-ratio properties. Please refer to nvdspreprocess and nvinfer.
Please refer to the sample /opt/nvidia/deepstream/deepstream/sources/apps/sample_apps/deepstream-preprocess-test. A minimal sketch of the relevant settings is shown below.
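
Sketch of the two config snippets (values such as the network input shape, tensor name and ROI are placeholders and must match your model; the full key set is in the deepstream-preprocess-test sample):

# config_preprocess.txt (sketch)
[property]
enable=1
target-unique-ids=1
network-input-shape=1;3;544;960
processing-width=960
processing-height=544
tensor-name=input_1
custom-lib-path=/opt/nvidia/deepstream/deepstream/lib/gst-plugins/libcustom2d_preprocess.so
custom-tensor-preparation-function=CustomTensorPreparation

[group-0]
src-ids=0
process-on-roi=1
roi-params-src-0=0;0;640;360    # left;top;width;height

# nvinfer config (sketch, only the padding-related keys)
[property]
maintain-aspect-ratio=1
symmetric-padding=1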

Hello Fanzh,

Thanks for response.

I am using nvinferserver with triton-server over gRPC for inferencing. I have tried nvinfer and it does support this, but I am not able to pass the preprocessed input to triton-server using the nvinferserver plugin.

I understand the only way with triton-server is to use the extraInputProcess or preprocess option of nvinferserver. Is it possible to update the primary input in extraInputProcess?

/**
 * override function
 * Do custom processing on extra inputs.
 * @primaryInput is already preprocessed. DO NOT update it again.
 */

As the comments in nvdsinferserver_custom_process_yolo.cpp show, extraInputProcess cannot process primaryInput.

Hello Fanzh,

Do you have any other suggestion for implementing ROI-based inferencing with nvinferserver and an external triton-server?

  1. Currently, nvdspreprocess + nvinferserver is not supported, but it is already on our roadmap.
  2. In nvdsexample, NvBufSurface *inter_buf is not sent to the downstream plugin; please process NvBufSurface *surface directly in gst_dsexample_transform_ip (see the sketch after this list).
  3. Please refer to nvdsvideotemplate; it supports creating a new buffer. In your case, you can save the ROI data in a new buffer and then send it to nvinferserver. Please refer to some nvdsvideotemplate samples: https://github.com/NVIDIA-AI-IOT/deepstream_tao_apps/tree/master/apps/tao_others/deepstream-gaze-app
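
For point 2, a rough sketch of operating on *surface* directly (assuming the buffers are RGBA and in CPU-mappable memory, e.g. NVBUF_MEM_CUDA_UNIFIED on dGPU; the helper name and ROI are placeholders and error checking is omitted):

#include <cstdint>
#include <opencv2/imgproc.hpp>
#include "nvbufsurface.h"

// Modify the frames that actually go downstream by mapping NvBufSurface *surface itself,
// e.g. called from gst_dsexample_transform_ip instead of touching inter_buf.
static void process_surface_in_place (NvBufSurface *surface, const cv::Rect &roi)
{
  NvBufSurfaceMap (surface, -1, -1, NVBUF_MAP_READ_WRITE);  // map all frames/planes
  NvBufSurfaceSyncForCpu (surface, -1, -1);                 // make device writes visible to CPU

  for (uint32_t i = 0; i < surface->numFilled; i++) {
    NvBufSurfaceParams &p = surface->surfaceList[i];
    cv::Mat frame (p.height, p.width, CV_8UC4, p.mappedAddr.addr[0], p.pitch);
    cv::Mat cropped = frame (roi).clone ();
    frame.setTo (cv::Scalar (0, 0, 0, 255));                // black background
    cropped.copyTo (frame (cv::Rect (0, 0, cropped.cols, cropped.rows)));
  }

  NvBufSurfaceSyncForDevice (surface, -1, -1);              // flush CPU writes back to device
  NvBufSurfaceUnMap (surface, -1, -1);
}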

Hi Fanzh,

I have added code to update the surface in get_converted_mat, which is called from gst_dsexample_transform_ip, but I do not see the preprocessed image sent downstream. I have attached the updated gstdsexample.cpp.

Your code is processing in_mat, which is tied to dsexample->inter_buf. As the last comment said, in nvdsexample, NvBufSurface *inter_buf is not sent to the downstream plugin; please process NvBufSurface *surface directly in gst_dsexample_transform_ip.

DeepStream 6.2 has been released; here is the link: NVIDIA DeepStream SDK Developer Guide — DeepStream 6.2 Release documentation.
Please refer to the preprocess + nvinferserver sample: deepstream-3d-action-recognition. A rough sketch of the relevant nvinferserver config change is shown below.
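
As far as I recall from that sample, the key nvinferserver config change is the input_tensor_from_meta block, which makes nvinferserver consume the tensor prepared by nvdspreprocess instead of doing its own scaling. A rough sketch (model name and gRPC URL are placeholders; please verify the exact keys against the sample's config file):

infer_config {
  unique_id: 1
  max_batch_size: 4
  backend {
    triton {
      model_name: "your_model"          # placeholder
      version: -1
      grpc { url: "localhost:8001" }    # external triton-server over gRPC
    }
  }
}
input_control {
  process_mode: PROCESS_MODE_FULL_FRAME
}
# consume the input tensor attached by nvdspreprocess
input_tensor_from_meta {
  is_first_dim_batch: true
}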

Thanks fanzh, I will try it with DS 6.2 and update.

Sorry for the late reply. Is this still a DeepStream issue to support? Thanks.

I have not tried it yet. I will try it by early next week.

Thanks for the update! If you need further support, please open a new topic. Thanks.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.