How to run inference on video in Python with TensorRT models

Description

Hi,
I followed the notebook about TAO with BodyPose Net, and I now have .tlt files as well as .etlt / .engine files for the INT8, FP16, and FP32 versions of the model. I used BodyPose Net with the COCO dataset to detect people's skeletons and their posture.

I want to run inference on a video using the different versions of the model (unpruned, pruned_retrained, INT8, FP16, and FP32) to compare their detection times.
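To make the comparison concrete, here is the kind of timing harness I have in mind (a rough sketch on my side; `infer_frame` is a hypothetical callable standing in for whatever the per-frame TensorRT call turns out to be):

```python
import time

def benchmark(infer_frame, frames, warmup=10):
    # infer_frame: hypothetical callable that runs one TensorRT inference
    # on an already-preprocessed frame; frames: list of such frames.
    for f in frames[:warmup]:
        infer_frame(f)  # warm-up runs so CUDA initialization isn't timed
    start = time.perf_counter()
    for f in frames:
        infer_frame(f)
    return (time.perf_counter() - start) / len(frames)  # seconds per frame
```

Running this once per engine would, I assume, give comparable per-frame latencies for the five variants.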

First, I wonder: can I use the .tlt files directly with TensorRT, or do I have to convert them to .etlt or .engine first?

Then, I searched for Python samples to detect someone's posture, but there is no classic classes file like the one you have for object detection (Person, Car, Bicycle, …). I only have the data_pose_config and model_pose_config folders, with JSON files inside containing the labels of the joints (ankles, nose, neck, etc.).

So, do I have to use these files in the Python code?
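My current guess is that those JSON files play the role the classes file plays in object detection, i.e. they map the network's output channels to joint names. Something like the sketch below is what I imagine (the file name and the "pose_points" key are assumptions on my part, not the actual BodyPose Net config layout):

```python
import json

# Hypothetical file name, taken from the data_pose_config folder.
with open("data_pose_config/coco.json") as f:
    pose_cfg = json.load(f)

# Assumed key: a list mapping output channel index -> joint name
# (nose, neck, ankles, ...), analogous to a classes file.
for idx, name in enumerate(pose_cfg["pose_points"]):
    print(idx, name)
```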

To sum up, I want to do what the tao bpnet inference command does, but in Python with TensorRT, and I don't know where to start or what to do with my model files.
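For reference, the kind of loop I am imagining is sketched below, based on the TensorRT 8.2 Python API with pycuda and OpenCV. The engine path, video path, input resolution, and normalization are placeholders, not BodyPose Net's actual preprocessing, and the post-processing of the raw outputs (heatmaps / part-affinity fields) into skeletons is left out:

```python
import cv2
import numpy as np
import pycuda.autoinit  # noqa: F401 -- creates the CUDA context
import pycuda.driver as cuda
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

# Deserialize an engine built e.g. with tao-converter (path is a placeholder).
with open("bpnet_fp16.engine", "rb") as f, trt.Runtime(TRT_LOGGER) as runtime:
    engine = runtime.deserialize_cuda_engine(f.read())
context = engine.create_execution_context()

# One host/device buffer pair per binding (assumes static binding shapes).
bindings, host_bufs, dev_bufs = [], [], []
for i in range(engine.num_bindings):
    size = trt.volume(engine.get_binding_shape(i))
    dtype = trt.nptype(engine.get_binding_dtype(i))
    host = cuda.pagelocked_empty(size, dtype)
    dev = cuda.mem_alloc(host.nbytes)
    host_bufs.append(host)
    dev_bufs.append(dev)
    bindings.append(int(dev))

stream = cuda.Stream()
cap = cv2.VideoCapture("input.mp4")  # placeholder video path
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Placeholder preprocessing: resize + scale to [0, 1], HWC -> NCHW.
    # The real values (size, mean, scale) should come from the model spec.
    inp = cv2.resize(frame, (384, 288)).astype(np.float32) / 255.0
    np.copyto(host_bufs[0], inp.transpose(2, 0, 1).ravel())

    cuda.memcpy_htod_async(dev_bufs[0], host_bufs[0], stream)
    context.execute_async_v2(bindings=bindings, stream_handle=stream.handle)
    for i in range(1, engine.num_bindings):
        cuda.memcpy_dtoh_async(host_bufs[i], dev_bufs[i], stream)
    stream.synchronize()
    # host_bufs[1:] now hold the raw network outputs; decoding them
    # into skeletons is the part I am missing.
cap.release()
```

Is this roughly the right direction, and where should the skeleton decoding come from?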

Thank you

Environment

TensorRT Version: 8.2.0.6 (python3: import tensorrt; print(tensorrt.__version__))
GPU Type: Tesla V100
Nvidia Driver Version: 465.19.01
CUDA Version: 11.3
CUDNN Version: 8.2
Operating System + Version: Ubuntu 18.04 x86_64
Python Version (if applicable): 3.6.9
TensorFlow Version (if applicable): 1.15.3 (python3 -m pip list | grep tensorflow)
PyTorch Version (if applicable): ///
Baremetal or Container (if container which image + tag): ///

Hi,
We recommend you check the supported features from the link below:
https://docs.nvidia.com/deeplearning/tensorrt/support-matrix/index.html
You can also refer to that page for the list of supported operators.
For unsupported operators, you need to create a custom plugin to support the operation.

Thanks!

Hi,

I think I misspoke.
I would like to find examples of how to run inference and retrieve the generated skeletons, ideally in Python, or failing that in C++, so I can understand how to use the model generated by TAO in a Python or C++ application. So far I have only found examples of inference with TAO or with Triton Inference Server, which I don't want, or things that weren't what I was looking for.
I should perhaps also point out that I don't have a background in artificial intelligence; I'm a novice in this field.

Thanks!

Hi @thomas.gigout,
Could you please refer to the link below, in case it's helpful?

Thanks

Hi,

Not really

@thomas.gigout, were you able to find a solution?