I am trying to run inference on a PeopleSegNet TensorRT engine file on a Jetson Xavier AGX. I generated the engine by running the tlt-converter tool on the pretrained .etlt model I got from here.
The conversion itself went fine; now I am trying to run inference on the exported engine file in Python.
But I can’t make sense of the output. I believe I am doing the pre-processing or post-processing wrong, though I am not sure which part exactly.
This is the script I use for inference in Python:
PeopleSegNet_trt.py (1.1 KB)
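In case it helps, this is roughly the pre-processing step I am attempting before feeding the buffer to the engine. Note that the 576×960 input resolution, RGB channel order, and [0, 1] scaling are assumptions on my part and may not match what PeopleSegNet actually expects:

```python
import numpy as np

def preprocess(frame_rgb, input_h=576, input_w=960):
    """Convert an HxWx3 uint8 RGB frame into the engine's input tensor.

    ASSUMPTIONS (not verified against the model card): the network takes
    NCHW float32 input of shape (1, 3, 576, 960), RGB order, scaled to [0, 1].
    """
    h, w, _ = frame_rgb.shape
    # Nearest-neighbour resize with pure NumPy (no OpenCV dependency).
    rows = np.arange(input_h) * h // input_h
    cols = np.arange(input_w) * w // input_w
    resized = frame_rgb[rows[:, None], cols[None, :], :]
    # HWC uint8 -> CHW float32 in [0, 1].
    chw = resized.transpose(2, 0, 1).astype(np.float32) / 255.0
    # Add the batch dimension -> (1, 3, input_h, input_w).
    return chw[None, ...]
```

If the engine expects mean-subtracted or BGR input instead, this would produce exactly the kind of garbage output I am seeing, so that is one of the things I would like confirmed.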
Any help would be appreciated.
P.S. I already ran trtexec on the engine file and it completed without errors, and the same engine also works fine with DeepStream, which is why I believe the problem is in my pre-processing or post-processing rather than in the engine itself.