Below is the complete information as applicable to my setup.
• Hardware Platform (GPU): Tesla P100
• DeepStream Version: 6.1
• TensorRT Version: 8.4.1
• NVIDIA GPU Driver Version: 11.7
• Issue Type: Question
• Requirement details: nvinferserver with Triton Inference Server and nvdspreprocess for an action recognition model with temporally batched input
I have a question regarding using nvdspreprocess and nvinferserver elements together.
I have a custom action recognition model with a 5D input shape that uses temporal batching. I would like to use nvdspreprocess to preprocess the input feed and generate 5D preprocessed tensors that can be fed to Triton Inference Server through nvinferserver.
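To make the temporal-batching requirement concrete, here is a minimal, illustrative sketch (plain Python, not actual plugin code) of how per-frame tensors are grouped into fixed-length clips before being stacked into the 5D model input; the function name, sequence length, and stride are placeholders chosen for illustration:

```python
def temporal_batches(frames, seq_len, stride):
    """Group a stream of per-frame tensors into fixed-length clips.

    Each clip of seq_len frames would later be stacked into one
    (C, T, H, W) tensor; batching N clips yields the 5D
    (N, C, T, H, W) input the model expects.
    """
    clips = []
    # Slide a window of seq_len frames over the stream, advancing by stride.
    for start in range(0, len(frames) - seq_len + 1, stride):
        clips.append(frames[start:start + seq_len])
    return clips


# Example: 8 frames, clips of 4 frames, new clip every 2 frames.
print(temporal_batches(list(range(8)), seq_len=4, stride=2))
```

This is only meant to pin down what "temporal batching" means here: the conversion from a flat frame stream to overlapping clips that form the time (T) dimension of the 5D tensor.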
1.) Can you provide a sample application that follows a similar approach? I have referred to the 3D action recognition sample app, which uses nvinfer and nvdspreprocess. My objective is to use nvinferserver with Triton Inference Server and the PyTorch (libtorch) backend to run my custom PyTorch action recognition model.
2.) Please confirm that temporally batched 5D inputs produced by nvdspreprocess can be sent to Triton Inference Server via nvinferserver.
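For reference, this is the kind of Triton model configuration I have in mind for the libtorch backend; the model name, dimensions, and class count are placeholders for my actual model (with `max_batch_size` set, Triton prepends the batch dimension, so `[3, 32, 224, 224]` becomes an effective 5D `[N, C, T, H, W]` input):

```text
name: "action_recognition"
platform: "pytorch_libtorch"
max_batch_size: 4
input [
  {
    name: "INPUT__0"            # libtorch backend naming convention
    data_type: TYPE_FP32
    dims: [ 3, 32, 224, 224 ]   # C, T, H, W; batch dim added by Triton
  }
]
output [
  {
    name: "OUTPUT__0"
    data_type: TYPE_FP32
    dims: [ 400 ]               # placeholder class count
  }
]
```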
Please let me know if anything needs further clarification.