Custom Sequence Preprocess library for NSCHW Model

• Hardware Platform (Jetson / GPU): GPU
• DeepStream Version: 7.0 (docker image: nvcr.io/nvidia/deepstream:7.0-triton-multiarch)
• NVIDIA GPU Driver Version (valid for GPU only): 535.171.04

Hi,

I am trying to integrate the Gst-nvdspreprocess plugin into my DeepStream pipeline. My downstream plugin, Gst-nvinfer, runs a model with an input shape of NSCHW (N=batch_size, S=sequence_len, C=channels, H=height, W=width).
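
For context, my nvdspreprocess config follows the layout of config_preprocess_3d_custom.txt from the sample app. A trimmed sketch of the [property] group (the shape and layer-name values are placeholders for my model, not the sample's):

[property]
enable=1
target-unique-ids=1
# custom input order, with the full N;S;C;H;W shape of my model
network-input-order=2
network-input-shape=4;32;3;224;224
processing-width=224
processing-height=224
network-color-format=0
tensor-data-type=0
tensor-name=<my model's input layer name>
custom-lib-path=./custom_sequence_preprocess/libnvds_custom_sequence_preprocess.so
custom-tensor-preparation-function=CustomSequenceTensorPreparation
process-on-frame=1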

I’ve reviewed the documentation for Gst-nvdspreprocess and the DeepStream 3D Action Recognition App.

I’ve also examined the sample source code at sources/apps/sample_apps/deepstream-3d-action-recognition.

I understand that I need to implement my own custom_sequence_preprocess library, using the deepstream-3d-action-recognition example as a reference. Based on my analysis, there are three main areas in deepstream-3d-action-recognition/custom_sequence_preprocess where code modifications are necessary to accommodate the NSCHW model:

Line 239 of sequence_image_process.cpp:

// current dstPatch memory pointer is for NCDHW(NCSHW) order type. for
// other order type, User need to update accordingly.
// e.g. for NSCHW, dstPatch = (void*)(basePtr + curIdx * CHWbytes())
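
My reading of this spot (everything beyond the names quoted in the comment is my assumption): in the NCSHW layout the sequence stride sits between the channel planes, so the converted frame has to be scattered per channel later on; in NSCHW each incoming frame is one contiguous C*H*W block, so the patch pointer for frame curIdx of a sequence reduces to exactly what the comment suggests:

// NSCHW: frame curIdx starts right after the previous frame's full CHW block.
// CHWbytes() stands for C * H * W * bytesPerElement, per the in-code comment.
void* dstPatch = (void*)(basePtr + curIdx * CHWbytes());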

Line 295 of sequence_image_process.cpp:

// Copy sequence ready rois/frames into batch buffer
// this is for NCSHW(NCDHW) order only, user need replace this segment
// copy block accordingly if other orders needed.
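
My rough idea for the NSCHW replacement of this copy block (all variable names here are placeholders, and I am ignoring any ring-buffer wrap-around the real code may have to handle): since both the per-ROI sequence buffer and the batch buffer are frame-major in NSCHW, one ready sequence is a single contiguous run of S*C*H*W elements and can be copied into its batch slot in one call:

// copy one ready sequence (S contiguous CHW frames) into its slot of the
// NSCHW batch buffer; all names are illustrative placeholders
size_t seqBytes = (size_t)S * C * H * W * bytesPerElement;
uint8_t* dst = (uint8_t*)batchBufPtr + (size_t)batchSlot * seqBytes;
cudaMemcpyAsync(dst, roiSeqBufPtr, seqBytes,
                cudaMemcpyDeviceToDevice, cuStream);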

Line 416 of sequence_image_process.cpp:

// User can replace and implement different cuda kernels for other order types.

Additionally, I believe I need to implement two new functions in sequence_preprocess_kernel.cu: preprocessNDCHW and ImageHWCToSCHW.
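
As a starting point, this is the kind of kernel I have in mind for ImageHWCToSCHW (the signature, the normalization formula, and the launch parameters are my own sketch, not the sample's API):

// Sketch: convert one interleaved HWC uint8 frame into planar CHW floats and
// write it at frame index s of an SCHW sequence buffer. srcPitch is the source
// row pitch in bytes; meanOffsets is a per-channel mean array in device memory.
__global__ void ImageHWCToSCHW(float* dstSeq, const uint8_t* src,
                               int s, int C, int H, int W, int srcPitch,
                               float scaleFactor, const float* meanOffsets)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;  // column
    int y = blockIdx.y * blockDim.y + threadIdx.y;  // row
    if (x >= W || y >= H) return;

    float* frame = dstSeq + (size_t)s * C * H * W;  // base of frame s in SCHW
    for (int c = 0; c < C; ++c) {
        float v = (float)src[(size_t)y * srcPitch + (size_t)x * C + c];
        frame[(size_t)c * H * W + (size_t)y * W + x] = scaleFactor * (v - meanOffsets[c]);
    }
}

// illustrative launch for one RGB frame
// dim3 block(16, 16);
// dim3 grid((W + block.x - 1) / block.x, (H + block.y - 1) / block.y);
// ImageHWCToSCHW<<<grid, block, 0, stream>>>(dstSeq, src, s, 3, H, W, srcPitch, scale, dMeans);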

However, I’m uncertain whether these are the only parts of the code that need modification or whether there are additional areas I should address.
I would also greatly appreciate guidance on how to correctly modify these sections to handle an NSCHW model.

Could you please assist me with this?

Thanks!

What kind of help do you need?

Almost all parts of the files under /opt/nvidia/deepstream/deepstream/sources/apps/sample_apps/deepstream-3d-action-recognition/custom_sequence_preprocess should be rewritten for the NSCHW tensor.

Hi,

I managed to modify all the parts I listed previously to handle the NSCHW model, and it works fine with the nvdspreprocess mode process-on-frame=1.

Now, I would like to know: can deepstream-3d-action-recognition/custom_sequence_preprocess also handle the nvdspreprocess mode process-on-frame=0, for performing temporal sequence batching on object bounding boxes?
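
Concretely, my assumption is that in process-on-frame=0 mode each unit handed to the custom lib is an object crop rather than a fixed ROI, so the per-sequence ring buffers would need to be keyed by (source id, tracker object id) instead of (source id, ROI index), plus some eviction once the tracker drops an object. A rough illustration of what I mean (the key type and map are mine, not the sample's):

#include <cstdint>
#include <functional>
#include <memory>
#include <unordered_map>

struct RingBuffer { /* stand-in for the per-sequence frame storage */ };

// key one temporal sequence by source + tracked object, so successive crops of
// the same object accumulate into the same ring buffer
struct SeqKey {
    uint32_t sourceId;
    uint64_t objectId;  // tracker ID, e.g. NvDsObjectMeta::object_id
    bool operator==(const SeqKey& o) const {
        return sourceId == o.sourceId && objectId == o.objectId;
    }
};

struct SeqKeyHash {
    size_t operator()(const SeqKey& k) const {
        return std::hash<uint64_t>()(((uint64_t)k.sourceId << 32) ^ k.objectId);
    }
};

std::unordered_map<SeqKey, std::unique_ptr<RingBuffer>, SeqKeyHash> seqBuffers;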

Thanks

It may work if your implementation is correct. We have an nvdspreprocess + SGIE sample too: deepstream_tao_apps/apps/tao_others/deepstream-pose-classification at master · NVIDIA-AI-IOT/deepstream_tao_apps (github.com)

Thank you for the information.

I understand that nvdspreprocess in combination with SGIE should work. However, my specific requirement is to confirm whether custom_sequence_preprocess can handle temporal sequence batching of object bounding boxes.

In the example provided (deepstream_tao_apps/apps/tao_others/deepstream-pose-classification), it seems to use only nvdspreprocess without custom_sequence_preprocess.
Can nvdspreprocess with custom_sequence_preprocess be used in combination with SGIE?

Thanks

Surely it can.

Both deepstream_tao_apps/apps/tao_others/deepstream-pose-classification at master · NVIDIA-AI-IOT/deepstream_tao_apps · GitHub and /opt/nvidia/deepstream/deepstream/sources/apps/sample_apps/deepstream-3d-action-recognition/custom_sequence_preprocess implement the “custom_tensor_function” interface of the nvdspreprocess plugin. See Gst-nvdspreprocess (Alpha) — DeepStream 6.4 documentation.

The deepstream_tao_apps/apps/tao_others/deepstream-pose-classification at master · NVIDIA-AI-IOT/deepstream_tao_apps · GitHub sample shows how to implement the interface for a sequence-batched tensor with an SGIE; the /opt/nvidia/deepstream/deepstream/sources/apps/sample_apps/deepstream-3d-action-recognition/custom_sequence_preprocess sample shows how to implement the interface for a sequence-batched tensor with a PGIE.
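
For reference, the exported entry points of such a custom library look roughly like the sketch below; please treat nvdspreprocess_interface.h and the sample's own headers as the authoritative declarations:

// sketched from the samples; check nvdspreprocess_interface.h for exact types
extern "C" {

// called once when the plugin loads / unloads the library
CustomCtx* initLib(CustomInitParams initparams);
void deInitLib(CustomCtx* ctx);

// referenced from the config via custom-tensor-preparation-function=...;
// receives the batched units (frames/ROIs for PGIE, object crops for SGIE),
// fills the acquired tensor buffer in the order your model expects (e.g. NSCHW),
// and reports readiness through the return status
NvDsPreProcessStatus CustomSequenceTensorPreparation(
    CustomCtx* ctx, NvDsPreProcessBatch* batch, NvDsPreProcessCustomBuf*& buf,
    CustomTensorParams& tensorParam, NvDsPreProcessAcquirer* acquirer);

}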

There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks
