Inference is so slow with torch 1.6

Hi @AastaLLL, thanks for your reply. I converted the model as follows:

model_trt = torch2trt(model, [img])
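For completeness, this is roughly the surrounding setup I'm using (a minimal sketch; the input shape below is a placeholder rather than my real input size, and it assumes the standard NVIDIA-AI-IOT torch2trt package):

import torch
from torch2trt import torch2trt

# both the model and the example input need to be on the GPU before conversion
model = model.eval().cuda()
img = torch.ones((1, 3, 224, 224)).cuda()  # placeholder shape, not my real input size

model_trt = torch2trt(model, [img])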

I couldn’t find how to print the log, but I did find that fp16_mode=False is the default. After setting it to True as below, I get 201 FPS.

model_trt = torch2trt(model, [img], fp16_mode=True)
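In case it is useful to others, this is roughly how I measured the FPS, plus how verbose build logging can apparently be enabled. Note that the log_level argument is my assumption from reading the torch2trt source (I haven't verified it on every version), and the loop counts are arbitrary:

import time
import torch
import tensorrt as trt
from torch2trt import torch2trt

# FP16 build with (presumably) verbose TensorRT logging; log_level is an
# assumption based on the torch2trt source, not something confirmed here
model_trt = torch2trt(model, [img], fp16_mode=True, log_level=trt.Logger.INFO)

# warm up, then time n inferences; synchronize so GPU work is actually counted
for _ in range(10):
    model_trt(img)
torch.cuda.synchronize()

n = 100
start = time.time()
for _ in range(n):
    model_trt(img)
torch.cuda.synchronize()
print(f'FPS: {n / (time.time() - start):.1f}')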

I find it’s not straightforward to deploy PyTorch/Detectron2 Faster R-CNN/Mask R-CNN models on the Xavier NX, but I’ll be waiting for your findings, as you mentioned here.

In the meantime I am also looking into other options, and I’m a bit confused by how many there are.

I am looking for a pipeline to deploy object detectors/keypoint detectors on my Jetson Xavier NX. The model needs to run in real time, and this project will be running for a long time. Later on, I will also need to deploy an instance segmentation model on an HPC cluster with NVIDIA cards for another project.

Could you please suggest which route I should take?