Parsing output of PeopleSegNet with TensorRT in Python

Hi,

I am trying to run inference on a PeopleSegNet TensorRT engine file that I generated with the tlt-converter tool from the pretrained .etlt model I got from here, on a Jetson Xavier AGX.

The conversion itself went fine; now I am trying to run inference on the exported engine file in Python.
But I can't make sense of the output. I believe I might be doing the pre-processing or post-processing wrong.

I am not sure what I am doing wrong, though.

This is the script I use for inference in Python:
PeopleSegNet_trt.py (1.1 KB)

Any help would be appreciated.

P.S. I ran trtexec on the engine file and it ran fine; I also tried it with DeepStream and it worked fine as well, which is why I believe the problem is in the pre-processing or post-processing.

In addition, I read here about the output shapes, but I still don't know how those outputs can be converted or processed to be usable directly with OpenCV to draw the bounding boxes and masks.

For preprocessing, it will keep the aspect ratio, then resize and pad. Reference: Inferring Yolo_v3.trt model in python - #26 by Morganh
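The keep-aspect-ratio resize-and-pad step can be sketched in plain NumPy. The 544x960 network input size here is an illustrative assumption (check your model's actual input dims), and the nearest-neighbour resize is only to keep the sketch dependency-free; swap in `cv2.resize` for real use:

```python
import numpy as np

def letterbox(image, target_h, target_w, pad_value=0):
    """Resize `image` (H, W, C) to fit (target_h, target_w) while keeping
    the aspect ratio, then pad the remainder (bottom/right) with pad_value.
    Nearest-neighbour resize keeps this sketch dependency-free."""
    h, w = image.shape[:2]
    scale = min(target_h / h, target_w / w)
    new_h, new_w = int(round(h * scale)), int(round(w * scale))
    # nearest-neighbour index maps into the source image
    rows = (np.arange(new_h) / scale).astype(int).clip(0, h - 1)
    cols = (np.arange(new_w) / scale).astype(int).clip(0, w - 1)
    resized = image[rows[:, None], cols]
    canvas = np.full((target_h, target_w) + image.shape[2:], pad_value,
                     dtype=image.dtype)
    canvas[:new_h, :new_w] = resized  # image top-left, padding bottom/right
    return canvas, scale

# example: a 480x640 frame into an assumed 544x960 network input
img = np.random.randint(0, 255, (480, 640, 3), dtype=np.uint8)
net_in, scale = letterbox(img, 544, 960)
```

Keep the returned `scale` around; you need it to map detections back to the original image coordinates.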

For how to process the TRT engine, besides reading GitHub - NVIDIA-AI-IOT/deepstream_tao_apps: Sample apps to demonstrate how to deploy models trained with TAO on DeepStream, you can also refer to other users' experiences in Inferring Yolo_v3.trt model in python (for yolo) and Inferring detectnet_v2 .trt model in python - #46 by Morganh (for yolo and detectnet_v2) and Run PeopleNet with tensorrt - #21 by carlos.alvarez (for detectnet_v2).
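As an illustration of the post-processing direction (not PeopleSegNet's exact output format — the linked threads cover that), once you have boxes in network-input coordinates, mapping them back to the original frame is just the inverse of the letterbox scale, assuming the padding sits on the bottom/right:

```python
import numpy as np

def unletterbox_boxes(boxes, scale):
    """Map [x1, y1, x2, y2] boxes from letterboxed network-input
    coordinates back to original-image coordinates. With bottom/right
    padding only, dividing by the resize scale is enough."""
    return np.asarray(boxes, dtype=np.float32) / scale

# hypothetical detections in network-input coordinates
boxes_net = [[10.0, 20.0, 30.0, 40.0]]
boxes_img = unletterbox_boxes(boxes_net, scale=2.0)
# boxes_img can now be drawn on the original frame with cv2.rectangle
```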

Thanks a lot!