Nvinferserver + nvdspreprocess sample app for action recognition with temporal batch support

Hello,

Below is the information applicable to my setup.

• Hardware Platform: GPU (Tesla P100)
• DeepStream Version: 6.1
• TensorRT Version: 8.4.1
• NVIDIA GPU Driver Version: 11.7
• Issue Type: Question
• Requirement details: nvinferserver with Triton server and nvdspreprocess for an action recognition model with temporally batched input

I have a question regarding using nvdspreprocess and nvinferserver elements together.

I have a custom action recognition model with a 5D input shape that uses temporal batching. I would like to use nvdspreprocess to preprocess the input feed and generate the 5D preprocessed output, which can then be fed as input to Triton server through nvinferserver.
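To illustrate what I mean by temporal batching, here is a small sketch of the 5D NCSHW layout; the sequence length, channel count and resolution below are only illustrative, not my actual model dimensions:

```
# Minimal sketch of a temporally batched NCSHW clip tensor:
# S decoded frames are stacked along a sequence axis before inference.
import numpy as np

S, C, H, W = 32, 3, 224, 224          # sequence length, channels, height, width (illustrative)
frames = [np.random.rand(C, H, W).astype(np.float32) for _ in range(S)]

# Stack along the sequence axis and add the batch axis: (N, C, S, H, W)
clip = np.stack(frames, axis=1)[np.newaxis, ...]
print(clip.shape)                      # (1, 3, 32, 224, 224)
```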

1.) Can you provide a sample application that follows a similar approach? I have referred to the 3D action recognition sample app, which uses nvinfer and nvdspreprocess. My objective is to use nvinferserver with Triton server and its LibTorch (PyTorch) backend to serve my custom action recognition PyTorch model.

2.) Please confirm that we can send temporally batched 5D inputs to Triton Inference Server after getting them from nvdspreprocess (see the sketch after this list for roughly what I have in mind on the Triton side).
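For question 2, this is the kind of thing I want to verify works at the Triton end, independent of DeepStream: a client sketch that sends a 5D tensor directly to a running Triton server. The model name "action_recognition" and the tensor names "input" / "output" are placeholders for my own model, not anything from the sample apps:

```
# Sanity-check sketch (bypasses DeepStream): send a 5D NCSHW tensor to Triton.
import numpy as np
import tritonclient.grpc as grpcclient

client = grpcclient.InferenceServerClient(url="localhost:8001")

# N=1 clip, C=3 channels, S=32 frames, H=224, W=224 (illustrative shape)
clip = np.random.rand(1, 3, 32, 224, 224).astype(np.float32)

infer_input = grpcclient.InferInput("input", list(clip.shape), "FP32")
infer_input.set_data_from_numpy(clip)

result = client.infer(model_name="action_recognition", inputs=[infer_input])
print(result.as_numpy("output").shape)   # "output" is also a placeholder name
```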

Please let me know if anything needs more clarification.

Thanks,
Hemang

Currently, the latest DeepStream 6.1.1 does not support nvdspreprocess + nvinferserver, only nvdspreprocess + nvinfer. Please refer to the Gst-nvinferserver section of the DeepStream 6.1.1 Release documentation.
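For reference, a rough sketch of that supported combination (nvdspreprocess + nvinfer), modeled on the deepstream-3d-action-recognition sample; the media file, resolution and config file names below are placeholders, and the property names are as used in that sample:

```
# Sketch of the supported path: nvdspreprocess prepares the NCSHW tensor,
# nvinfer consumes it from the attached tensor metadata.
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)

pipeline = Gst.parse_launch(
    "filesrc location=walk.mp4 ! qtdemux ! h264parse ! nvv4l2decoder ! "
    "m.sink_0 nvstreammux name=m batch-size=1 width=1280 height=720 ! "
    # nvdspreprocess builds the sequence (clip) tensor via its custom library,
    # driven by its config file, as in the 3D action recognition sample
    "nvdspreprocess config-file=config_preprocess_3d_custom.txt ! "
    # input-tensor-meta tells nvinfer to use the preprocessed tensor metadata
    # instead of running its own preprocessing
    "nvinfer config-file-path=config_infer_3d_action.txt input-tensor-meta=true ! "
    "fakesink"
)

pipeline.set_state(Gst.State.PLAYING)
bus = pipeline.get_bus()
bus.timed_pop_filtered(Gst.CLOCK_TIME_NONE,
                       Gst.MessageType.ERROR | Gst.MessageType.EOS)
pipeline.set_state(Gst.State.NULL)
```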

Does that mean temporal batching (NCSHW) is not supported with nvinferserver yet?

Yes, nvinferserver can't accept nvdspreprocess's tensor metadata. It does the preprocessing by itself and only supports NCHW / NHWC input. Please refer to nvinferserver's feature list in the Gst-nvinferserver section of the DeepStream 6.1.1 Release documentation.

Okay, thank you for the information.
